
About This Club

Dedicated section for talk & support for the Helios4 and Helios64 open source NAS. Led and moderated by the Kobol Team.
What's new in this club
  2. @Mydogboris Is the Red LED (System Error) on the front panel blinking?
  3. Hello, I installed OMV on my Helios64 and created a share. The device stays online and works well for hours, and then at some point eth0 goes offline and no longer communicates. The only way to restore connectivity is a reboot. I looked at the logs and don't see anything jumping out at me as to why this happens, but I could very easily be missing something, or the log didn't capture the event. I have not yet tried to console in through the USB-C port when this occurs, but will try next time. Has anyone else seen this issue or fixed it?
  4. Thanks for your replies! For some reason this forum rate limits me down to one message a day. Is there a way to lift this limitation? I see now that many people recommend against SSH. I think it's a combination of implementation drawbacks (because strong hardware also has poor SSH speed) and weak hardware (because I can still reach higher speeds with two x86 machines). Opportunities to try are CESA (although cryptodev looks highly experimental and requires manual selection of ciphers, and the numbers in the linked thread are still far from perfect) and HPN-SSH (some patchset that I
  5. Yeah, I have Ubuntu running as a console connection to it. I'll give that a shot and post the logs when I get a chance. Thanks.
  6. @Lore, do you have another Linux system? Please modify /boot/armbianEnv.txt on the micro SD card and add/modify the following lines: verbosity=7 console=serial extraargs=earlyprintk ignore_loglevel It should make the serial console more verbose.
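      For reference, the relevant part of /boot/armbianEnv.txt would then look something like this (a minimal sketch; any other entries already in the file, such as rootdev, stay as shipped):
        verbosity=7
        console=serial
        extraargs=earlyprintk ignore_loglevel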
  7. The design is awesome so I'd definitely not change it! I would have liked to have WoL working; that is an important feature for those who don't plan on having it on 24/7. (I know, a NAS is intended to be on 24/7, but to make things more environmentally friendly, the ability to turn it off and on without physically touching the device is a must.) Other than that, I'd also add the option to disable or at least dim the front LEDs (can't place it somewhere visible like near the TV), and ECC RAM would also be very welcome.
  8. @qtmax You can refer to our wiki page on the CESA engine to accelerate crypto: https://wiki.kobol.io/helios4/cesa/ There is also an old thread on that topic. Unfortunately it might not be of much help because, as stated on our wiki, by default you cannot use the CESA engine for OpenSSH on Debian Buster, while it is possible under Debian Stretch. However it seems possible if you recompile OpenSSH to disable sandboxing. https://forum.doozan.com/read.php?2,89514 The above forum link, which is more up to date than our wiki, gives some instructions on how
  9. I'd focus on tuning your means of transfer to your Helios4 rather than disk tuning. SSH has never been good for raw throughput. You can tune ciphers and compression to get better performance. Native rsync rather than rsync over SSH is also more performant, and with rsync there are further tuning opportunities. Given 4 drives, I recommend against RAID6. I'd choose RAID5 for capacity, or RAID10 if you really intend to have a lot of concurrent disk IO. What type of data are you trying to protect? (If just large video files, standard advice is to f
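      A rough sketch of the kind of tuning meant here (hostnames, paths, cipher choice and the rsyncd module name are placeholders, and the second form assumes an rsync daemon with a matching module is already set up on the Helios4):
        # rsync over SSH with a lighter cipher and compression disabled
        rsync -a -e "ssh -c aes128-ctr -o Compression=no" ./data/ user@helios4:/srv/backup/
        # native rsync against a daemon module called "backup", avoiding SSH entirely
        rsync -a ./data/ rsync://helios4/backup/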
  10. Two things that, if added, would definitely make me get another one: - a transceiver / SFP+ port with onboard wiring supporting 10GbE and/or 25GbE (10GbE ports are too big / expensive / hot for them to work cheaply on a board like this); - a dedicated lane for the M.2 SSD, or replace it with a dedicated 4x NVMe slot. The machine works just fine as is otherwise, especially with the new 2.5GbE fixes. (It bodes well for the remainder of the unfixed issues. :)) (Having two would mean I could mirror the enclosure to an offsite location, though that'd be pric
  11. Agree, but on a 3000+ USD NAS things aren't much better. The client is a high-end workstation running Mint with a 10Gb connection ... 138.4MB/s 224.29MB/s ... while NFS transfer speed is normal. IMO the hardware performance is just fine.
  12. I have a Helios4 (2nd batch) with 4 HDDs, and before I started using it, I decided to test its performance on the planned workload. However, I was quite unsatisfied with the numbers. First, the SSH speed is about 30 MB/s, while the CPU core where sshd runs is at 100% (so I suppose it's the bottleneck). I create a sparse file and copy it over SSH:
      helios4# dd if=/dev/null bs=1M seek=2000 of=/tmp/blank
      With scp I get up to 31.2 MB/s:
      pc# scp root@helios4:/tmp/blank /tmp
      With rsync I get pretty much the same numbers:
      pc# rm /tmp/blank
      pc# rsync -e ssh --progress root
  13. I had this for the second time now in a week; captured the kernel panic this time via serial, so thought I'd post it. (Note I only have the governor set to performance, but have not limited CPU speed to lower.)
      [895638.308515] Unable to handle kernel paging request at virtual address fff70000f77d1ab8
      [895638.309217] Mem abort info:
      [895638.309470] ESR = 0x96000004
      [895638.309745] EC = 0x25: DABT (current EL), IL = 32 bits
      [895638.310216] SET = 0, FnV = 0
      [895638.310491] EA = 0, S1PTW = 0
      [895638.310773] Data abort info:
      [895638.311033] ISV = 0, ISS = 0x00000004
      [895638.311375] CM =
  14. Just had another one right now; the previous one was 3 days ago. So I plugged in the serial cable, but the device was completely gone: no login prompt or any output. Also, there are no logs or anything about the previous boot once rebooted either; journalctl --list-boots lists only the current boot, and dmesg -E is empty. Anyhow, trying the CPU speed thing now. I hope it works; cleaning the entire SSD cache that gets dirty at every crash is worrying me.
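      One possible reason the previous boot leaves no trace: if systemd-journald is only logging to volatile storage, everything is lost at reboot. A minimal way to make the journal persistent, so journalctl -b -1 has something to show after the next crash (assuming the default Storage=auto setting):
        mkdir -p /var/log/journal
        systemctl restart systemd-journald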
  15. I have run into this issue multiple times, and the process flow is this:
      Flash the image "Armbian_20.11.6_Helios64_buster_current_5.9.14.img" to a micro SD card.
      Run through the config (installing OMV with SAMBA).
      Kick over to OMV to configure the share / media so it's available on my network.
      It runs for some time (this last install was running for maybe a few weeks before it triggered a kernel panic).
      Power off. Power on.
      The board boots, then hangs on "Starting Kernel..."
      Is there some trick I am missing to get this thing to actually boot / reboot correc
  16. Hello everybody, I'm experiencing a similar problem. Sometimes my WD RED 12 TB HDD is not recognized in the slot it's in; when I change it to any other slot, it works again. Nevertheless, it seems this problem doesn't apply to only one specific slot, but changes from time to time. EDIT: The LED of that slot doesn't turn on, yet the drive starts and runs. Does anybody experience the same problem, or know of any fix? Thank you in advance. Best Regards.
  17. The VM doesn't run stable:
      [ 36.147718] list_del corruption. prev->next should be ffff7ed2b15a4708, but was 00007ed27f654708
      [ 36.151265] kernel BUG at lib/list_debug.c:53!
      [ 36.152900] Internal error: Oops - BUG: 0 [#1] SMP
      [ 36.154614] Modules linked in: cpufreq_ondemand cpufreq_conservative cpufreq_userspace cpufreq_powersave quota_v2 quota_tree dm_crypt algif_skcipher af_alg nls_ascii nls_cp437 vfat fat aes_ce_blk crypto_simd cryptd aes_ce_cipher ghash_ce gf128mul sha2_ce sha256_arm64 efi_pstore sha1_ce efivars virtio_console evdev qemu_fw_cfg dm_mod nfsd auth_rpcg
  18. Thank you for confirming. Looking forward to getting my Helios64 up and running.
  19. The RK3399 has a single PCIe 2.1 x4 port; you can't split the lanes across multiple interfaces unless you use a PCIe switch, which is an additional component that would increase the board cost a lot. That's clearly the intention; we don't want to restart the software and documentation from scratch either :P Yes, this will be fixed, it was a silly design mistake. We will also make the back panel bracket holder with rounder edges to avoid users scratching their hands :/ We will post soon how to manage fan speed based on HDD temperature (using the hddtemp tool),
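      Something along these lines, for illustration only (the PWM node path, drive name and thresholds are placeholders, and the real setup would more likely go through fancontrol rather than a hand-rolled loop):
        #!/bin/bash
        PWM=/sys/class/hwmon/hwmon0/pwm1       # board-specific fan PWM node
        while true; do
            t=$(hddtemp -n /dev/sda)           # drive temperature in degrees C
            if [ "$t" -ge 45 ]; then
                echo 255 > "$PWM"              # full speed
            elif [ "$t" -ge 38 ]; then
                echo 150 > "$PWM"              # medium
            else
                echo 80 > "$PWM"               # quiet
            fi
            sleep 60
        done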
  20. @yay thank you for the additional testing. We will take note of the MTU value.
  21. Nvm, I found PR 2567. With those changes applied, I can cheerfully report 2.35Gbps net TCP throughput via iperf3 in either direction with an MTU of 1500 and TX offloading enabled. Again, thanks so much! One way remains to reproducibly kill the H64's eth1: set the MTU to 9124 on both ends and run iperf in both directions, each machine running its own server and acting as the client of the other. But as the throughput difference is negligible (2.35Gbps vs 2.39Gbps, both really close to the 2.5Gbps maximum), I can safely exclude that use case for myself. I can't recall whether that even worked o
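      The reproduction described above boils down to something like this (hostnames are placeholders; eth1 is the 2.5GbE interface):
        # on both machines: raise the MTU and start an iperf3 server
        ip link set dev eth1 mtu 9124
        iperf3 -s &
        # then start a client in each direction at the same time
        helios64$ iperf3 -c <client-ip> -t 60
        client$ iperf3 -c <helios64-ip> -t 60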
  22. Whilst I'm satisfied with my Helios64 as well, I would have liked:
      - the original RK3399Pro, as the NPU would have been useful in TensorFlow for PhotoPrism (as a Google Photos replacement). The RK3588 would be even nicer, but everyone is waiting on that chip;
      - more RAM: once PhotoPrism indexes photos, it easily runs out of memory. I limit the memory in Docker now, but it's not ideal. Granted, there is an issue in their code, so I just restart the container each night;
      - NVMe for fast disk access;
      - better HDD trays (already mentioned);
      - the screw eyelets for the back panel (1s
  23. I am very interested in purchasing a half-size-case version for 5 SSD/2.5" drives with the same raw design (I'll take 2 of them!), or a half-height backplane for the Helios64 exclusively for SSD/2.5" drives, 5 disks in width (10cm) by 8cm in height.
  24. I added voting options, please revote! EDIT: crap… We can't edit our votes, I did not expect that
  25. Weird, that's not something I decided when setting up the post. Well, I'll just add "support announced features". Also, the 2.5GbE connector is already fixed in batch 2. I have a zraid (4 HDD) and never had an issue. You may need to contact Kobol on this. True, I didn't think of that. Thanks.