
About This Club

Dedicated section for talk & support for the Helios4 and Helios64 open source NAS. Led and moderated by the Kobol Team.
  1. What's new in this club
  2. I have the same issue. dmesg (I triggered the rescan with the command `echo "- - -" > /sys/class/scsi_host/host0/scan`). Armbian: 20.08.21, fully upgraded (with firmware update). I also have openmediavault installed.
  3. Amen! I have been following this forum with great interest and suspect it's only a matter of time until I buy one of these devices (or maybe wait for the ECC one). Thanks to everyone testing and contributing feedback toward getting these devices stable; I for one certainly appreciate it (I am sure others do/will as well).
  4. Can't you simply share a ZFS dataset via SMB? I am almost positive you can do so via NFS, which is the GNU/Linux-native equivalent (see the ZFS sharing sketch after this list).
  5. Hi. I'm having the problem that I don't know what your question is, and I think others will have the same problem. Could you please formulate your question? And maybe also give the output of `armbianmonitor -u`.
  6. ❯ sudo iperf3 -s
     -----------------------------------------------------------
     Server listening on 5201
     -----------------------------------------------------------
     Accepted connection from 192.167.0.1, port 23497
     [ 5] local 192.167.0.2 port 5201 connected to 192.167.0.1 port 23498
     [ ID] Interval       Transfer     Bitrate
     [ 5]  0.00-1.00 sec  163 MBytes   1.37 Gbits/sec
     [ 5]  1.00-2.00 sec  149 MBytes   1.25 Gbits/sec
     [ 5]  2.00-3.00 sec  104 MBytes    868 Mbits/sec
     [ 5]  3.00-4.00 sec  141 MBytes   1.19 Gbits/sec
     [ 5]  4.00-5.00 sec  174 MBytes   1.46 Gbits/sec
     [ 5]
  7. Did more testing over the weekend on 5.9.9. I was able to benchmark with FIO on top of a ZFS dataset for hours with the load average hitting 10+ while scrubbing the datastore (an FIO sketch follows this list). No issues. Right now the uptime is 3 days. I'm actually a little surprised at the performance; it's very decent for what it is. I wonder if the fact that I'm running ZFS and 5.9.9 while others are using mdadm and 5.8 is the difference. I'm not really planning on going backwards on either: if 5.9.9 works then there is no need to build another kernel, and you would have to pry ZFS out of my cold, dead hands.
  8. Bought a new micro SD card and put https://dl.armbian.com/helios4/archive/Armbian_20.08.13_Helios4_buster_current_5.8.16.img.xz on it. Booted, set the root password, and directly started a RAID check (9.4% done so far; a check-monitoring sketch follows this list). I am very excited to see what happens.
  9. Okay, is there an easy way to install it? Then I might try it. I believe you on the kernel issue; I'm just thinking it might be user space triggering a kernel issue.
  10. I ordered a replacement PSU, which was delivered and put into operation this morning. I'll give feedback in any case (spontaneous reboot or continuous operation).
  11. Where did you get this new PSU?
  12. Have you tried with kernel 4.4? To me it still looks like an issue related to TX checksum offload not working properly on LK5.x; I cannot reproduce the issue on my setups (an offload-toggling sketch follows this list). Can you post some iperf output of the USB Startech 1Gbps issue? When doing the iperf test with the USB Startech 1Gbps adapter, please ensure the 2.5GbE is disconnected, just to be sure traffic is transiting only through the Startech adapter.
  13. Hmmm, could be a manufacturing mistake on the harness cable. You can contact us at support@kobol.io to request a harness replacement.
  14. Keep us updated on the result once you manage to downgrade the Barracuda SSD 120 to SATA 3.1 specs. Here is the link you are referring to, which might be useful for others: https://www.seagate.com/sg/en/support/kb/why-does-my-computer-with-a-barracuda-ssd-sometimes-say-no-bootable-device-detected-during-startup/ Unfortunately, we cannot find such a tool for Linux. Yes, we are aware of such an issue; we will fix it in the next iteration of the enclosure.
  15. @Protxch the error LED should not turn on, since it's software controlled. Make sure the front panel does not touch the casing.
  16. 5.8.x had been running fine on my device for about 9 and a half days, then it randomly crashed. No logs seem to have survived the crash, so this is going to be nearly impossible to debug (a persistent-logging sketch follows this list).
  17. I am almost certain you will run into the same problem on an x86 machine. Armbian is 100% compatible at the userland level, and OMV is a bulky suite which dramatically changes the OS; check the OMV forums for how to deal with those problems. IMO, to be on the safe side, you will need to use the power of Docker, which Armbian has fully supported for years (a container sketch follows this list).
  18. I use qbittorrent in a Docker container. It easily reboots the Helios4. Again, it's a kernel issue. .66 is the last one that does not reboot: 8 hours of uptime so far. With all later kernels, I can barely get 2 hours.
  19. I once had openmediavault (NAS software) and citadel-server (groupware software) running together on an x86 computer with the Debian operating system. When I try this on a Helios64 running Armbian (Debian 10 Buster) and kernel 5.8, the package manager does not allow it. That means when I start with openmediavault installed and then want to install citadel-server as well, the package manager removes openmediavault first. Does anybody have an idea or a solution to the problem? (A conflict-inspection sketch follows this list.)
  20. I guess in your cases the system also freezes and the watchdog service then reboots it. I am going to set up the system from scratch using https://dl.armbian.com/helios4/archive/Armbian_20.08.13_Helios4_buster_current_5.8.16.img.xz tomorrow (a flashing sketch follows this list).
  21. @jbergler 5.8.x & 5.9.x are working here as well, but I'm not using ZFS, just plain vanilla mdadm RAID and LVM2 formatted as XFS. If you have an extra set of HDDs, could you try building a new data pool with mdadm or LVM2 to test your setup? Since you're getting memory-related errors, is there a way for you to run a memory test on your board (a memtester sketch follows this list)? Have you checked if the heatsink is seated properly over the components of the board?
  22. @Mangix, try 5.4.66-mvebu #20.08.3. I also had random reboots after some updates under heavy NFS load again; I had this problem previously, but after reinstalling the system from scratch onto a spare SD card it was rock solid. I inserted the spare card again and it's stable again, but I haven't had time to track down where the problem is, though.
  23. 5.9.9 with Armbian patches is working well for me so far. I've scrubbed the pool 5-6 times, and syncoid has run from my hypervisor every hour for the last two days (a replication sketch follows this list). I'm mostly just looking for a stable backup system that supports ZFS, and it looks like this will work.
  24. Hey, I am not sure if it is possible to select a kernel branch by commit. Maybe someone else knows? May I ask what else you have installed, or if you have tried with a clean image? If there is no private data/configuration on your Helios4, you might also upload an image of your SD card so we can try with your config. I have been running a Helios4 for nearly 2 days now with no crashes, transferring some data (~400 GiB) with Samba.
      Welcome to Armbian 20.11.0-trunk Buster with Linux 5.8.18-mvebu
      No end-user support: built from trunk
      System load: 12%
  25. @Protxch According to the documentation, SATA 1 and the M.2 slot are mutually exclusive. Try connecting the HDDs to SATA 2 and 3 and restart your Helios64. As @Werner mentions, check that the board boots with only the microSD card connected (no HDDs or M.2 cards connected) to rule out an SD card gone bad during the flashing procedure.
  26. This might be a stupid question, but did you insert a suitably prepared SD card into the board?
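
For item 4, a minimal sketch of sharing a ZFS dataset, assuming a hypothetical dataset named tank/media; ZFS can export a dataset directly via its share properties, or you can point an ordinary Samba share at the mountpoint:

    # Export a dataset over NFS or SMB via ZFS properties
    # ("tank/media" is a placeholder dataset name).
    sudo zfs set sharenfs=on tank/media
    sudo zfs set sharesmb=on tank/media   # needs Samba with usershares enabled

    # Or share the mountpoint with a plain smb.conf stanza:
    # [media]
    #     path = /tank/media
    #     read only = no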
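For item 7, a hedged example of the kind of FIO run described; the directory and parameters below are illustrative, not the poster's actual settings:

    # Sequential-write benchmark on top of a mounted ZFS dataset
    # ("/tank/bench" is a placeholder path).
    fio --name=seqwrite --directory=/tank/bench \
        --rw=write --bs=1M --size=2G --numjobs=4 \
        --time_based --runtime=600 --group_reporting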
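For item 8, this is how an mdadm consistency check is typically started and monitored, assuming the array is /dev/md0 (adjust to your device):

    # Kick off a check and watch its progress.
    echo check | sudo tee /sys/block/md0/md/sync_action
    cat /proc/mdstat    # progress appears as e.g. "check = 9.4%"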
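For item 12, a sketch of how TX checksum offload can be inspected and disabled as a test, assuming the interface is eth0:

    # Show current offload settings, then turn TX checksumming off.
    ethtool -k eth0 | grep tx-checksum
    sudo ethtool -K eth0 tx off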
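For item 16, one way to make logs survive a crash is a persistent systemd journal; note that Armbian's ramlog service may keep /var/log in RAM, so treat this as a sketch to adapt:

    # Persist the journal to disk so the previous boot's logs are readable.
    sudo mkdir -p /var/log/journal
    sudo systemctl restart systemd-journald
    journalctl -b -1 -p err    # after the next crash: errors from the prior boot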
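For items 17 and 18, a hedged example of running a service such as qbittorrent in Docker; the image, ports, and paths are illustrative (linuxserver.io publishes a qbittorrent image, but verify the current tag yourself):

    docker run -d --name qbittorrent \
      -e PUID=1000 -e PGID=1000 \
      -p 8080:8080 -p 6881:6881 \
      -v /srv/qbittorrent/config:/config \
      -v /srv/downloads:/downloads \
      linuxserver/qbittorrent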
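For item 19, the package manager's removal plan can be inspected before committing to it; package names are as given in the post:

    # See what the resolver intends to do without changing anything
    # (-s = simulate), then check the declared package relationships.
    apt-get -s install citadel-server
    apt-cache show citadel-server | grep -iE 'conflicts|breaks|replaces'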
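For item 20, a typical way to write the linked Armbian image to an SD card; /dev/sdX is a placeholder, so double-check the device name before running dd:

    xzcat Armbian_20.08.13_Helios4_buster_current_5.8.16.img.xz | \
      sudo dd of=/dev/sdX bs=4M conv=fsync status=progress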
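For item 21, a userspace memory test is one option; memtester cannot touch RAM the kernel is already using, so the size (512M) and pass count (3) below are arbitrary, conservative choices:

    sudo apt install memtester
    sudo memtester 512M 3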
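For item 23, an hourly syncoid run is often wired up with cron; the host and dataset names here are hypothetical:

    # /etc/cron.d/syncoid -- pull the hypervisor's dataset every hour.
    0 * * * * root /usr/sbin/syncoid root@hypervisor:tank/vms tank/backup/vms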