
clostro

Members
  • Posts: 30

Reputation Activity

  1. Like
    clostro got a reaction from tionebrr in Mystery red light on helios64   
    Just wanted to report that the CPU frequency mod has been running stable under normal use for 15 days now (on a 1GbE connection). Haven't tried the voltage mod.
     
    I'll switch to the February 4th Buster 5.10 build soon.
     
     
    edit: 23 days, and I shut it down for an SD card backup and system update. CPU freq mod is rock solid.
  2. Like
    clostro got a reaction from Gareth Halfacree in Feature / Changes requests for future Helios64 board or enclosure revisions   
    @Gareth Halfacree If you would be interested in FreeBSD, @SleepWalker was experimenting with it some time ago. Maybe still is.
     
  3. Like
    clostro reacted to Gareth Halfacree in Encrypted OpenZFS performance   
    Your MicroServer has either an Opteron or Ryzen processor in it, either one of which is considerably more powerful than the Arm-based RK3399.
     
    As a quick test, I ran OpenSSL benchmarks for AES-256-CBC on my Ryzen 2700X desktop, an older N54L MicroServer, and the Helios64, block size 8192 bytes.
     
    Helios64: 68411.39kB/s.
    N54L: 127620.44kB/s.
    Desktop: 211711.31kB/s.
     
    From that, you can see the Helios64 CPU is your bottleneck: 68,411.39kB/s is about 67MB/s, or within shouting distance of your 62MB/s real-world throughput - and that's just encryption, without the LZ4 compression overhead.
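 
    For reference, numbers like these come from OpenSSL's built-in benchmark; a minimal sketch of how to reproduce them, assuming the distribution's stock openssl binary (the exact flags used above aren't stated):
 
      # single-core AES-256-CBC throughput; the 8192-byte column is the relevant one
      openssl speed -evp aes-256-cbc
      # rough multi-core estimate (the RK3399 has 6 cores)
      openssl speed -multi 6 -evp aes-256-cbc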
     
  4. Like
    clostro reacted to allen--smithee in Very noisy fans   
    @gprovost
    As we might expect, the pad spreads a little at the outer edges of the processor and also at the edge of the heatsink,
    so you were right: there is no need to leave the inductors bare.
    I now think 19x19mm is enough for the CPU thermal pad at 1.5mm thickness.
    I redid the separation between the CPU pad and the rest of the board and put pieces of pad into the slots of the inductors.
    I don't know if that will be possible for you; nevertheless, we now know what we need for our next blends at home ;).
     
    P6 and P7 > PWM = 35
    P6 and P7 > PWM = 50 is my new MAXPWM in fancontrol
    P6 and P7 > PWM = 255
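 
    A cap like that is normally expressed through MAXPWM in /etc/fancontrol; a hedged excerpt, under the assumption that both Helios64 fans show up as pwm1 on two hwmon devices (the real paths come from pwmconfig and can vary per boot):
 
      # /etc/fancontrol (excerpt only; INTERVAL, DEVPATH, DEVNAME, FCTEMPS etc. omitted)
      MINPWM=hwmon3/pwm1=35 hwmon4/pwm1=35
      MAXPWM=hwmon3/pwm1=50 hwmon4/pwm1=50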
     
     
    @SIGSEGV
    If you are interested in this solution, I recommend the TP-VP04-C 1.5mm for your Helios64 (2 sheets of 90x50x1.5mm).
     
     
  5. Like
    clostro got a reaction from gprovost in Mystery red light on helios64   
  6. Like
    clostro reacted to barnumbirr in [SOLVED] Helios64 won't boot after latest update   
    SOLVED!!!

    Of course this was due to armbianEnv.txt being corrupt. Restoring the file from backups allows the device to boot cleanly again.
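 
    For anyone hitting the same thing: a corrupt armbianEnv.txt typically shows up as a file full of NUL bytes, and it can be checked and restored from another machine or a rescue SD card; a sketch (the backup path is only an example):
 
      hexdump -C /boot/armbianEnv.txt | head    # 00-bytes or garbage instead of key=value lines means corruption
      cp /path/to/your/backup/armbianEnv.txt /boot/armbianEnv.txt
      sync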
  7. Like
    clostro reacted to ShadowDance in SATA issue, drive resets: ataX.00: failed command: READ FPDMA QUEUED   
    Hey, sorry I haven't updated this thread until now.
     
    The Kobol team sent me, as promised, a new harness and a power-only harness so that I could do some testing:
    • Cutting the capacitors off my original harness did not make a difference.
    • The new (normal) harness had the exact same issue as the original one.
    • With the power-only harness and my own SATA cables, I was unable to reproduce the issue (even at 6 Gbps).
    • The final test was to go to town on my original harness and cut the connector in two; this let me use my own SATA cable with the original harness and there was, again, no issue (at 6 Gbps).
    Judging from my initial results, it would seem that there is an issue with the SATA cables in the stock harness. But I should try to run this for a longer period of time -- the problem was I didn't have SATA cables for all disks; once I do, I'll try a week-long stress test. I reported my results to the Kobol team but haven't heard back yet.
     
    Even with the 3.0 Gbps limit, I still occasionally run into this issue with the original harness; it has happened twice since I did the experiment.
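 
    (The 3.0 Gbps limit mentioned here is usually applied as a kernel parameter; a sketch of the typical Armbian way of doing it, assuming armbianEnv.txt is used for extra boot arguments:)
 
      # /boot/armbianEnv.txt - limit all SATA links to 3.0 Gbps; remove the line (and reboot) to go back to 6 Gbps
      # append to an existing extraargs line if you already have one
      extraargs=libata.force=3.0Gbps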
     
    If someone else is willing to repeat this experiment with a good set of SATA cables, please do contact Kobol to see if they'd be willing to ship out another set of test harnesses, or perhaps they have other plans.
     
    Here's some pics of my test setup, including the mutilated connector:
     

  8. Like
    clostro reacted to gprovost in Feature / Changes requests for future Helios64 board or enclosure revisions   
    The RK3399 has a single PCIe 2.1 x4 port; you can't split the lanes across multiple interfaces unless you use a PCIe switch, which is an additional component that would increase the board cost a lot.
     
    That's clearly the intention; we don't want to restart the software and documentation from scratch either :P
     
    Yes, this will be fixed, it was a silly design mistake. We will also give the back panel bracket holder rounder edges so users don't scratch their hands :/
     
    We will post soon about how to manage fan speed based on HDD temperature (using the hddtemp tool); that would make more sense than the current approach.
    You can already find an old example: https://unix.stackexchange.com/questions/499409/adjust-fan-speed-via-fancontrol-according-to-hard-disk-temperature-hddtemp
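 
    Until that write-up is published, a rough sketch of the idea (polling hddtemp and writing the fan PWM directly; the disk and hwmon paths are placeholders and the thresholds are arbitrary):
 
      #!/bin/bash
      # crude HDD-temperature fan control - adjust paths and thresholds to your setup
      DISK=/dev/sda
      PWM=/sys/class/hwmon/hwmon3/pwm1
      while true; do
          T=$(hddtemp -n "$DISK")          # -n prints just the temperature in Celsius
          if   [ "$T" -ge 45 ]; then echo 255 > "$PWM"
          elif [ "$T" -ge 38 ]; then echo 120 > "$PWM"
          else                       echo 60  > "$PWM"
          fi
          sleep 60
      done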
     
  9. Like
    clostro got a reaction from gprovost in Feature / Changes requests for future Helios64 board or enclosure revisions   
    - Honestly, I don't see the point of baking an M.2 or any future NVMe PCIe port onto the board. Instead I would love to see a PCIe slot or two exposed. That way the end user can pick whatever expansion is needed: an NVMe adapter, 10GbE networking, SATA, USB, or even a SAS controller for additional external enclosures. It will probably be more expensive for the end user but will open up a larger market for the device. We already have 2 lanes / 10 Gbps worth of PCIe not utilized as is (correct me on this).
    - Absolutely more RAM, user-upgradeable, and with ECC support.
    - Better trays, guys, they are a pain right now lol. (Loving the black and purple though.)
    - Some sort of active/quiet cooling on the SoC, separate from the disks, as any disk activity over the network or even a simple CPU-bound task ramps up all the fans immediately.
    - Most important of all, please do not stray too far from the current architecture, both hardware- and firmware-wise. Aside from everything else, we would really love to see a rock-solid launch next time around. Changing a lot of things might cause most of your hard work to date to mean nothing.
  10. Like
    clostro reacted to Salamandar in Feature / Changes requests for future Helios64 board or enclosure revisions   
    Weird, that's not something I decided when setting up the post.
     
     
     
    Well, I'll just add "support announced features". Also, the 2.5GbE connector issue is already fixed in batch 2.
     
    I have a raidz pool (4 HDDs) and have never had an issue. You may need to contact Kobol about this.
     
    True, I didn't think of that. Thanks.
     
  11. Like
    clostro reacted to allen--smithee in Feature / Changes requests for future Helios64 board or enclosure revisions   
    I am very interested in purchasing a half-size case version for 5 SSDs/2.5" drives with the same raw design  
     
     
     
  12. Like
    clostro reacted to Salamandar in Feature / Changes requests for future Helios64 board or enclosure revisions   
    I added voting options, please revote!
     
    EDIT: crap… We can't edit our votes, I did not expect that
  13. Like
    clostro reacted to ebin-dev in Feature / Changes requests for future Helios64 board or enclosure revisions   
    The next thing I would buy is a drop-in replacement board for the Helios64 with:
    • RK3588
    • ECC RAM
    • NVMe PCIe port
    • 10GBase-T port
  14. Like
    clostro reacted to ShadowDance in Mystery red light on helios64   
    It's probably indicating a kernel panic. You could try limiting the CPU speed and setting the governor as suggested in the linked topic. You could also try hooking up the serial port to another machine and monitoring it for any output indicating why the crash happened.
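 
    For reference, the CPU-speed/governor change suggested there is usually done through /etc/default/cpufrequtils on Armbian; a sketch with the values commonly recommended for the Helios64 (1.4 GHz cap, ondemand governor):
 
      # /etc/default/cpufrequtils
      ENABLE=true
      MIN_SPEED=408000
      MAX_SPEED=1416000
      GOVERNOR=ondemand
 
    followed by 'systemctl restart cpufrequtils' (or a reboot).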
     
  15. Like
    clostro reacted to yay in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    According to your latest blog post:
    That's awesome to hear, thank you very much for addressing this issue! Is there any way to test this change upfront? I'd be eager to give this a spin with a self-built kernel/image if necessary, but couldn't find anything concrete in the public git repositories so far.
  16. Like
    clostro reacted to Werner in No HDD1 when using M.2 SATA on SHARED port.   
    Might also be worth exploring the idea of an external add-on case for additional 3.5" hard drives...
    @gprovost you heard what we want: Go big or go home
  17. Like
    clostro reacted to gprovost in No HDD1 when using M.2 SATA on SHARED port.   
    @Werner Hahaha yes we are working at improving the design ;-) More news soon.
  18. Like
    clostro reacted to gprovost in No HDD1 when using M.2 SATA on SHARED port.   
    @fr33s7yl0r Updated your title to make it more friendly.
     
    If we could have made it 6 SATA ports at the time, with one port dedicated to the M.2 slot, you can be sure we would have done it.
    We could have taken a different approach using a PCIe switch, but it would have significantly increased the board cost.
     
    Any board design is about finding the best recipe within a budget for a list of use cases with the fewest tradeoffs possible.
    The M.2 SATA port on the Helios64 was an addition to support more use cases for the product, even though, yes, it isn't perfect that the port has to be shared.
     
  19. Like
    clostro got a reaction from allen--smithee in No HDD1 when using M.2 SATA on SHARED port.   
    https://man7.org/linux/man-pages/man7/lvmcache.7.html
    https://linuxhint.com/configuring-zfs-cache/
    Here you go... use 4 HDDs and an SSD cache.
    Or sell your unit; quite a lot of people wanted to buy one and couldn't get one in time.
    OR, Frankenstein your unit and add port multipliers to the original SATA ports. You can add up to 20 HDDs to 4 of the original SATA ports, and up to 5 SSDs to the remaining original SATA port. The controller specs say it supports port multipliers; not sure about the Armbian kernel, you might have to modify it.
     
    Btw, you can take a look at the new Odroid H2+ with port multipliers (up to 10 SATA disks, plus PCIe NVMe) if you are into more tinkering. You also get two 2.5G network ports instead of one. The Hardkernel team has a post about the setup and benchmarks: https://wiki.odroid.com/odroid-h2/application_note/10_sata_drives
    I am planning to expand my network infra with the H2+ soon. You can even plug a $45 4-port 2.5G switch into the NVMe slot now. I'm going crazy about this unit. If only it didn't have a shintel (r/AyyMD) CPU in it.
     
    Anyhow-
    Just doing a bit of research shows that this was not exactly a 'decision' made by Kobol. There are only two reliable PCIe-to-SATA controllers I could find that support multiple (4+) SATA ports, given the limitation of the RK3399, which has 4 PCIe lanes. It would be a different story if the RK had 8 lanes, but that is another can of worms that includes CPU arch, form factor, etc. Not gonna open that can while I'm barely qualified to open this one.
     
    What we have here in the Helios64 is the JMB585, and the other option was the Marvell 88SE9215. The Marvell only supports 4 ports, while the JMB supports 5. I could not find any controller that works reliably with 4 PCIe 2.1 lanes and has more than 5 ports.
    There is the ASMedia ASM1166, which actually supports 6 ports, but it was probably not available during the design of the Helios64 as it is quite new. Not only that, there is a weird thread about its log output on the Unraid forums.
     
    In the end, this '5 ports only' thing was not exactly a decision by Kobol, but rather an uninformed decision made by you. You don't see people here complaining about this. You are the second person who has even mentioned it, and the only one complaining so far, in CAPS no less. Which means the specs were clear enough to pretty much everyone that this was the case.
     
    My suggestion is to replace one of the drives with a Samsung 860 Pro, make it a SLOG/L2ARC, or, in my case, an LVM cache (writeback mode; make sure you have a proper UPS, or that the battery in the unit is connected properly), and call it a day. The SATA port is faster than the 2.5G Ethernet or the USB DAS mode anyhow, so your cache SSD will mostly perform fine.
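 
    For completeness, the two cache setups linked at the top boil down to commands along these lines; pool, volume group, and device names are placeholders, and the LVM variant assumes a recent LVM (2.03+) for --cachevol:
 
      # ZFS: add an SSD partition as L2ARC (read cache) and another as SLOG
      zpool add tank cache /dev/disk/by-id/ata-YOUR_SSD-part1
      zpool add tank log   /dev/disk/by-id/ata-YOUR_SSD-part2
 
      # LVM: attach the SSD as a writeback cache in front of an existing LV
      vgextend vg0 /dev/sdb
      lvcreate -n cache0 -L 200G vg0 /dev/sdb
      lvconvert --type cache --cachevol cache0 --cachemode writeback vg0/data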
  20. Like
    clostro got a reaction from Werner in No HDD1 when using M.2 SATA on SHARED port.   
  21. Like
    clostro reacted to Werner in No HDD1 when using M.2 SATA on SHARED port.   
    Shared lines are not uncommon even on x86 mainboards. For example: https://www.asrock.com/mb/intel/fatal1ty z97 killer/#Specification
    Take note of the Storage and Slots sections:
    That is meant as an addition to @tommitytom's point.
    You can try to sell it on the forums; I have already noticed a few people who did this successfully here. However, if I were a potential buyer, I'd consider a person with such an attitude not necessarily trustworthy.
  22. Like
    clostro reacted to tommitytom in No HDD1 when using M.2 SATA on SHARED port.   
    Seems like the mistake is yours. It's made pretty clear that the port is shared (it's even mentioned twice on the front page of kobol.io); it's not exactly a secret.
  23. Like
    clostro reacted to m11k in eMMC endurance for /var/log   
    I haven't been able to find any specs on the eMMC module in the Helios64. I've found some posts online indicating that it is a Samsung KLMAG1JETD-B041, but I haven't been able to find any info on the write endurance of this module (not sure whether it is standard for eMMC vendors to provide that information the way SSD vendors do).
     
    I have been able to run `mmc extcsd read /dev/mmcblk1`, which shows that the eMMC lifetime estimation is between 0 and 10%, but since this value is reported in 10% increments, it's not very helpful.
     
    I've had issues with the default ramlog-based /var/log, so I had to turn it off.  I've increased the size to 250MB but it regularly fills up because the 'sysstat', 'pcp' (for cockpit-pcp), and 'atop' packages all write persistent logs into /var/log, which 'armbian-truncate-logs' doesn't clean up.  I could probably update the armbian-truncate-logs script to support these tools if that was the only issue.  However I've also enabled the systemd persistent journal, and although I can see that armbian-truncate-logs is calling journalctl, I still get messages about corrupt journals.  Also `journalctl` only reads logs from /var/log/journal, not /var/log.hdd/journal, so it is only able to show the current day's logs, which isn't very useful.
     
    So I'm trying to determine whether it would be safe to have /var/log on disk without the ramlog overlay. Any recommendations? I do have my root filesystem on btrfs, with /var/log in a separate subvolume. Maybe there's some way to tweak how frequently /var/log is synced to disk, but I don't know of any mechanism off the top of my head.
     
    Thanks,
    Mike
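 
    One knob that covers both the size and the sync-frequency questions above is journald itself; a hedged sketch of /etc/systemd/journald.conf settings (the values are examples, not eMMC-specific recommendations):
 
      [Journal]
      Storage=persistent        # keep logs in /var/log/journal across reboots
      SystemMaxUse=100M         # cap the on-disk journal size
      SyncIntervalSec=5m        # batch writes to reduce flash wear
      MaxRetentionSec=1month
 
    followed by 'systemctl restart systemd-journald'.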
  24. Like
    clostro reacted to aprayoga in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    We are working on this issue.
    The same Realtek driver works fine on an Ubuntu laptop (amd64), on the Helios4 (armhf), and on the Helios64 with LK 4.4.
    We are looking into the USB host controller driver; it seems there are some quirks not implemented in the mainline kernel.
  25. Like
    clostro reacted to yay in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    Yes, it's off. "ethtool -K eth1 tx off" produced no output, and the link gave up just a few seconds after the transfer started. Forcing tx offload on and off again (should we really trust the driver to report the current setting?) didn't improve things. Yesterday I fiddled around with disabling autosuspend on the USB ports and devices in /sys in case it's some weird issue like that - no improvement.
     
    However, I've found that changing the advertised speeds for autonegotiation on my desktop's side (removing the bit for 2.5Gbps, for example) causes the link to go down for a few seconds and then come back up - that's picked up by the Helios64's eth1 and it goes back to a stable 1Gbps connection. So at least eth1 can be resurrected in a few seconds without a full NAS reboot.
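 
    The renegotiation trick described above can be reproduced with ethtool on the peer machine; a sketch (the interface name is a placeholder, and with autoneg left on ethtool advertises only the speed given):
 
      # drop the desktop side to 1 Gbps, forcing the link to renegotiate
      ethtool -s enp5s0 speed 1000 duplex full autoneg on
      # later, re-advertise 2.5 Gbps
      ethtool -s enp5s0 speed 2500 duplex full autoneg on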