
About This Club

Dedicated section for talk & support for the Helios4 and Helios64 open source NAS. Led and moderated by the Kobol Team.
  1. What's new in this club
  2. I did some more troubleshooting. The +5V HDD Rail A is present when no hard drives are connected, but drops to 0.8 V when the drives are connected. It looks like the rail is enabled by a TDM3421 P-channel MOSFET? Measuring the MOSFET with no load: Gate = 3V, Source = 5V, Drain = 5V. With the rail loaded: Gate = 3V, Source = 5V, Drain = 0.85V. Is it implemented as a high-side switch? Any plans to release the schematic for the Helios64? I'm no electrical engineer, but I suspect the MOSFET is faulty?
  3. Thanks @Zageron, my account's been "activated"! Kobol folks, if it helps, I see the following when plugging/unplugging the USB cable into a Debian PC:
     XXXXX@deadshot:~ $ lsusb -t
     /: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
         |__ Port 5: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
     /: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/3p, 480M
         |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/8p, 480M
     /: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 480M
     /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/3p
  4. Yes, once you have everything installed, you need to enable zfs-import-cache, zfs-import.target, zfs-mount, zfs.target, zfs-zed (a sketch of the corresponding systemctl commands follows after this list).
  5. I've gotten ZFS working pretty well following the compile instructions on their website. I just put everything in a chroot to make it cleaner (a sketch of how such a chroot session typically continues follows after this list).
     # First, install a few things we're going to need:
     sudo apt install debootstrap
     # Now, create our chroot
     mkdir -p chroot-zfs
     # This will take a while
     sudo debootstrap --variant=buildd focal chroot-zfs
     # mount proc, sys, and dev
     sudo mount -t proc /proc chroot-zfs/proc
     sudo mount --rbind /sys chroot-zfs/sys
     sudo mount --rbind /dev chroot-zfs/dev
     # Copy the files we need for apt. The sources.list is missing many things we'll need.
     cp /etc/apt/sourc
  6. Can you try mdadm --run /dev/md0 to reactivate the array? Not sure why your array is deactivated, though. Maybe some settings were saved in OMV. Then you can check the status of the array again with mdadm -D /dev/md0 and cat /proc/mdstat.
  7. mdadm -D /dev/md0
     /dev/md0:
               Version : 1.2
            Raid Level : raid0
         Total Devices : 5
           Persistence : Superblock is persistent
                 State : inactive
       Working Devices : 5
                  Name : helios64:0  (local to host helios64)
                  UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
                Events : 26341
        Number   Major   Minor   RaidDevice
           -       8       64       -          /dev/sde
           -       8       32       -          /dev/sdc
           -       8        0       -          /dev/sda
           -       8
  8. Upgraded today with no issues seen so far. Got Plex and Jellyfin running in Docker. For some reason it did change my encrypted device from /dev/md127 to /dev/md0, but that was about it.
  9. JohnnyMnemonic, you can either wait 24 hours now that you have a like, or join the IRC channel and ask to be activated.
  10. Thanks for the quick response, @aprayoga! I gave it a shot and unfortunately it doesn't seem to work for me. I've upgraded to Armbian 20.11, which includes LK 5.9:
      XXXXX@helios64:~ $ uname -r
      5.9.10-rockchip64
      XXXXX@helios64:~ $ cat /etc/os-release
      PRETTY_NAME="Armbian 20.11 Buster"
      NAME="Debian GNU/Linux"
      VERSION_ID="10"
      VERSION="10 (buster)"
      VERSION_CODENAME=buster
      ID=debian
      HOME_URL="https://www.debian.org/"
      SUPPORT_URL="https://www.debian.org/support"
      BUG_REPORT_URL="https://bugs.debian.org/"
      I've created that file and applied the user overlay a
  11. @gprovost The files are being shared via netatalk to a Mac. That worked fine until now. The number of files within a single folder is quite large (up to 13,000 files). The hard drive is formatted with ext4 using the DIR_INDEX option to speed up file access. The problem first appeared after switching one of the four drives from 2 TB to 4 TB. I've copied the files from A to B, so the number of files did not change. But I had to reindex all files within a picture processing software on the Mac (creating thumbnails, determining picture properties, etc.). I've
  12. I checked the +12V and +5V on the 15-pin power connector directly (the one the hard drives plug into). All measure fine, but of course that is without a load connected. In my case only the two hard drives connected to 12V HDD Rail B start up. Is something wrong with the power staggering approach in the bootloader? dmesg output immediately after booting:
      ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
      ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
      ata3: SATA link down (SStatus 0 SControl 300)
      ata4: SATA link down (SStatus 0 SControl 300)
  13. http://ix.io/2FHd This is the machine in the broken state. (Requiring reboot)
  14. I upgraded and it's working fine for me. I have configured iSCSI and replaced about 6 virtual machines by moving their services onto the Helios64. I have to say that PostgreSQL 13 runs very well, with the load spread across all cores on the system. Now I just have to figure out how to reconfigure the UniFi controller to run either as a Docker image or as another service (leaning towards a container at this point).
  15. Yeah, that is the reason why it was not removed until now: no one complained. Armbian has been past LK 4.19 for quite a while (hell, there was even a complete 5.4 release) and no one seems to have any issues. Igor, gprovost and myself all use a Helios4 / ClearFog Pro on a daily basis (as NAS or whatever) and do not have / are unable to reproduce these problems... I think there are just many specific factors under which the DFS stuff causes problems.
  16. FWIW: although there seems to be evidence that the DFS code is the cause of the spontaneous reboots, I want to let you know that my installation (see https://forum.armbian.com/topic/16038-random-system-reboots/?do=findComment&comment=113510) has now been running for more than two days after the upgrade to 5.8.18-mvebu.
  17. Some of you may be interested in not only storing your data on the Helios64 but also running other services on it - such as home automation. One of the popular platforms for home automation in Europe - and in Germany in particular - is "Homematic". They opened up their ecosystem some time ago and allow others to implement their own components (some excellent examples from Jérôme are here). As of yesterday - with the latest commit - it is possible to run a virtualised "ccu3" base station for that platform, called pivccu, with support for a detached antenna module (RPI-RF-MOD) coupled to H
  18. This is actually really smart. @gprovost, could something along these lines be added to the default setup?
  19. If you want the fans of your Helios64 to start spinning earlier, look no further. This will allow the fans to spin at a constant speed from the earliest stage of initrd until fancontrol is started by the system. My use case is full disk encryption and recovery in initramfs; sometimes I can have the machine powered on for quite a while before fancontrol starts. Or, if the boot process encounters an error, the fans may never start; this prevents that. (An illustrative initramfs hook in the same spirit follows after this list.)
      Step 1: Tell initramfs to include the pwm-fan module
      echo pwm-fan | sudo tee -a /etc/initramfs-tools/modules
      Step 2:
  20. Sounds great, thanks in advance. I am looking forward to having a stable Helios4 again.
  21. Once the above PR has been merged and a bugfix release has been done, we will inform you.
  22. How does it work, or rather: when will it be possible to run apt-get upgrade to update to a 5.18.* kernel without the problematic DFS patches?
  23. @gprovost Thanks for the hint. You are right, the Helios4 one does 12V/8A and the Synology one only 12V/6A. I will switch them back. @Mangix You were also right, the RAID check finished successfully running kernel 4.19.63-mvebu #5.91 (with over 36 hours of uptime at the moment). I saw in the other thread that the system freeze is related to the DFS patches, therefore this thread can be closed now. Thanks to everybody for your input and help with this problem.
  24. This is what I do as well, although I had to create a dummy zfs-dkms package to prevent it from being pulled in by zfsutils-linux.
  25. Hi, I'm using the Helios64 with Armbian 20.11 Buster (5.9.10-rockchip64). On every reboot the SMART counter for "Unexpected_Power_Loss (174)" / "POR_Recovery_Count (235)" increases by one. I've seen this unexpected behaviour on Samsung, WD and Crucial SSDs … Rebooting the box with sync && echo 1 > /sys/block/sdX/device/delete && reboot turns the disk off cleanly (a generalised sketch follows after this list), but shouldn't this be done by systemd/the kernel?
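
A minimal sketch of the services mentioned in item 4, assuming ZFS was installed from self-built packages; the unit names are the standard ones shipped with zfsutils-linux and zfs-zed:

    # Enable the ZFS services so pools are imported and mounted at boot
    sudo systemctl enable zfs-import-cache.service
    sudo systemctl enable zfs-import.target
    sudo systemctl enable zfs-mount.service
    sudo systemctl enable zfs.target
    sudo systemctl enable zfs-zed.service
    # Sanity check after the next reboot
    systemctl status zfs-mount.service
    zpool status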
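
Item 5 is cut off above; purely as an illustration of how such a chroot build session typically continues (the package list is an assumption based on the usual Debian/Ubuntu build prerequisites, not the author's exact steps; see the OpenZFS documentation for the authoritative procedure):

    # Enter the chroot and install common ZFS build dependencies
    sudo chroot chroot-zfs /bin/bash
    apt update
    apt install build-essential autoconf automake libtool gawk alien fakeroot dkms \
        libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev \
        libattr1-dev libelf-dev python3 python3-dev python3-setuptools python3-cffi
    # ...then fetch the ZFS sources and build the .deb packages inside the chroot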
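
For item 19, whose Step 2 is cut off above, here is an illustrative initramfs-tools hook in the same spirit; it is not the author's exact script, and the hwmon glob and the PWM value of 150 (out of 255) are assumptions:

    #!/bin/sh
    # /etc/initramfs-tools/scripts/init-top/fan
    PREREQ=""
    prereqs() { echo "$PREREQ"; }
    case "$1" in
        prereqs) prereqs; exit 0 ;;
    esac

    # The module was already added in Step 1; modprobe here is just a safeguard.
    modprobe pwm-fan
    # Drive every hwmon channel that exposes pwm1 at a fixed speed.
    # Note: this also touches any non-fan hwmon device that happens to have pwm1.
    for pwm in /sys/class/hwmon/hwmon*/pwm1; do
        [ -w "$pwm" ] && echo 150 > "$pwm"
    done

Make the hook executable (chmod +x) and rebuild the initrd with sudo update-initramfs -u.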
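
Generalising the one-liner from item 25 to all disks, as an illustration only (detaching every /dev/sd* device is an assumption about your setup; run as root and double-check which devices you are removing):

    #!/bin/sh
    # Flush caches, cleanly detach all SATA disks, then reboot
    sync
    for d in /sys/block/sd*/device/delete; do
        echo 1 > "$d"
    done
    reboot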