gprovost

Members
  • Content Count

    216

Reputation Activity

  1. Like
    gprovost reacted to Bramani in Helios4 ECC - CPU freq info - air flow   
    Apologies again, I started this thread out of eagerness and curiosity, not to create confusion. That said, the fans should be relatively quiet once fancontrol has kicked in after boot, but that depends very much on your environment. You are right though, Batch 3 fans cannot be shut off (see Wiki). I am not sure whether I understand your second question correctly: are you considering reversing the fans (and the air flow)? Please don't. As gprovost wrote, the default direction should be the optimum for most use cases. Keeping the HDDs primarily at safe operating temperatures is what really matters in a NAS.
     
    The overview is just a few shell functions I whipped up for comparing the two units. They have several flaws. For instance, I have used sd[a|b|c|d] names instead of proper UUIDs. The label "RPM" is wrong; it should read "PWM" instead. And I am not sure if the temperature representation is correct. But nevertheless, here they are. Note that getCPUFreq and getCPUStats currently do not work on Buster, but on Stretch only.
     
    Add these to your non-privileged user's .bashrc and reload with "source .bashrc" or relogin. After that, simply enter "getSysStatus" on the command line to print the overview.
    # Print current CPU frequency
    getCPUFreq() {
      local i freq
      for i in 0 1; do
        freq=$(cat /sys/devices/system/cpu/cpu$i/cpufreq/cpuinfo_cur_freq)
        printf "%s\n" "CPU $i freq.: ${freq%???} MHz"
      done
    }

    # Print CPU statistics
    getCPUStats() {
      local i stats
      for i in 0 1; do
        # This works, but it needs three expensive syscalls to external commands
        #stats="$(cpufreq-info -c $i | grep 'stats' | sed 's/,/;/;s/:/: /g;s/\./,/g')"
        # Same, but reduced by one
        stats="$(cpufreq-info -c $i | \
          awk '/stats/{gsub(",",";");gsub(":",": ");gsub(/\./,",");print}')"
        # Cut front and end from string; this could be done in awk, too, but the
        # resulting expression would be long and hard to decipher for non-awk users.
        # Using shell-internal string functions should not be that expensive, either.
        stats="${stats#*: }"
        stats="${stats% *}"
        # Finally, print the resulting string, nicely formatted
        printf "%s\n" "CPU $i stats: ${stats}"
      done
    }

    # Print system fan speeds
    getFanSpeed() {
      local i j=3 speed
      for i in 10 17; do
        speed=$(cat /sys/devices/platform/j$i-pwm/hwmon/hwmon$j/pwm1)
        printf "%s\n" "Fan J$i RPM: ${speed}"
        ((j++))
      done
    }

    # Print SoC temperature
    getSoCTemp() {
      local temp=$(cat /sys/devices/virtual/thermal/thermal_zone0/temp)
      printf "%s\n" "SoC core temp.: ${temp%???},${temp: -3}"
    }

    # Print ambient temperature
    getAmbientTemp() {
      local temp=$(cat /dev/thermal-board/temp1_input)
      printf "%s\n" "Ambient temp.: ${temp%???},${temp: -3}"
    }

    # Print temperature of all HDDs
    getDriveTemps() {
      local i temp
      for i in /dev/sd[abcd]; do
        temp=$(sudo /usr/sbin/smartctl -a $i | awk '/^194/{print $10}')
        printf "%s\n" "$i temp.: ${temp}"
      done
    }

    # Print current power mode status of all HDDs
    getDriveStates() {
      local i state
      for i in /dev/sd[abcd]; do
        state="$(sudo /sbin/hdparm -C $i)"
        printf "%s\n" "$i state: ${state##* }"
      done
    }

    # Print system status
    getSysStatus() {
      # printf "\n"
      # getCPUStats
      # printf "\n"
      # getCPUFreq
      printf "\n"
      getFanSpeed
      printf "\n"
      getSoCTemp
      getAmbientTemp
      printf "\n"
      getDriveTemps
      printf "\n"
      getDriveStates
    }
  2. Like
    gprovost got a reaction from uiop in Helios4 Support   
    @uiop The 32-bit architecture limitation is actually in the Linux page cache, which is used by file systems; it limits the max usable size of a partition or logical volume to 16TB. This doesn't stop you from having several partitions (or logical volumes) of <16TB each.
     
    Here is an example, taking into consideration 4x 12TB HDDs:
     
    If you use mdadm and set up a RAID6 or RAID10 array, you will have an array of 24TB of usable space.
    You can then create 2x partitions of 12TB, or any other combination that maxes out the array size, as long as each partition doesn't exceed 16TB.
     
    If you use LVM and set up a Volume Group (VG) with the 4 drives (PVs), you will get a volume group of 48TB of usable space.
    You can then create as many Logical Volumes (LVs) as you want (max: 255) to max out the VG size, as long as each LV doesn't exceed 16TB.
     
    Actually, a good approach is to use LVM on top of an mdadm RAID; this gives you the flexibility to resize your partitions, or I should say Logical Volumes (LVs).
     
    You can of course also use neither mdadm nor LVM and just set up each disk individually... as long as you follow the rule that you can't create a single partition bigger than 16TB, though you can create several partitions.
     
    Hope it clarifies.
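    The sizing rules above can be sketched as a small script (the device names /dev/sd[abcd] and the mdadm/LVM commands in the comments are hypothetical examples, shown for illustration only):

```shell
#!/bin/sh
# Capacity math for 4x 12TB drives under the 32-bit 16TB-per-filesystem limit.
set -eu
drives=4 ; size_tb=12 ; limit_tb=16

raid6_usable=$(( (drives - 2) * size_tb ))  # two drives' worth of parity -> 24TB
raid10_usable=$(( drives / 2 * size_tb ))   # mirrored pairs              -> 24TB
vg_usable=$(( drives * size_tb ))           # plain LVM VG, no redundancy -> 48TB

echo "RAID6 / RAID10 usable: ${raid6_usable}TB / ${raid10_usable}TB"
echo "LVM VG usable:         ${vg_usable}TB (each LV must stay under ${limit_tb}TB)"

# Hypothetical commands (destructive, run as root -- adapt before use):
#   mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[abcd]
#   pvcreate /dev/md0 && vgcreate vg0 /dev/md0
#   lvcreate -L 12T -n lv0 vg0   # repeat, keeping every LV below 16TB
```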
     
     
     
     
     
     
  3. Like
    gprovost reacted to qstaq in ZFS on Helios4   
    There are license incompatibility problems and patent risks with ZFS on Linux. ZFS is CDDL-licensed, which risks potential patent lawsuits for anyone publishing ZFS code contributions under a GPL license. That's why ZFS will not be in any mainline kernel any time soon. The CDDL license was seemingly designed by Sun to be incompatible with the GPL license. That doesn't mean you can't use ZFS on Linux, you just have to use CDDL-licensed ZFS code to be granted Oracle's patent protection / immunity. Debian, Ubuntu, etc. get round this by shipping the ZFS module as a DKMS-built out-of-tree module and not as part of the kernel.
     
    I would suggest that there are currently too many problems with ZFS, both from a practical and legal viewpoint, to think about making ZFS a core part of Armbian. Most of Armbian's target hardware is under-specified for a default setup of ZFS, and proper storage management under ZFS requires much more knowledge than mdraid and ext4 or BTRFS. It's not complex, it just requires you to think and plan more at the deployment stage and have an awareness of some storage concepts that most users don't have knowledge of.
     
    Now on to the positive news. ZFS is an excellent filesystem, and for certain use cases it's way more powerful and flexible than BTRFS or any lvm / mdraid / ext4 combination; it's also pretty simple to admin once you learn the basic concepts. I'm sure there is a small portion of Armbian users that would get significant benefit from having ZFS available as a storage option. Even better news is that fundamentally every Armbian system already supports ZFS at a basic level, as both Debian and Ubuntu have the zfs-dkms package already available in the repos. It takes less than 5 minutes to enable ZFS support on any Armbian image just by installing the following packages with apt: zfs-dkms, zfs-initramfs & zfsutils-linux (probably zfsnap is also wanted but not required). The problem is that the default config will definitely not be suitable for a 1-2GB RAM SBC, and we would need to create a more appropriate default config. I still wouldn't recommend ZFS for the rootfs; if you want snapshots on your rootfs then use BTRFS, or use ext4 with timeshift.
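    The install described in the paragraph above can be sketched like this (the commands are echoed as a dry run rather than executed, since the real thing needs root on a Debian/Ubuntu-based image; the package names are exactly the ones listed above):

```shell
#!/bin/sh
# Sketch: enable basic ZFS support on an Armbian (Debian/Ubuntu) system.
# zfsnap is optional; everything else comes from the standard repos.
set -eu
pkgs="zfs-dkms zfs-initramfs zfsutils-linux"
# Dry run: the commands are printed, not run. Execute them as root to install.
echo "apt-get update"
echo "apt-get install -y $pkgs"
echo "modprobe zfs"
```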
     
    @Igor @gprovost I would think that the best place to implement this would be as an extra option in armbian-config -> software. I wrote a simple script to install ZFS support and to set some default options for compression, arc, cache, etc for low mem / cpu devices. The same script works without changes on Debian Buster, Ubuntu 16.04, 18.04 & 19.04 as the process is identical on all OS variants as far as I can discern
     
    Is https://github.com/armbian/config the live location for the armbian-config utility? If so I will have a look at adding a basic ZFS option in software
     
     
  4. Like
    gprovost reacted to qstaq in ZFS on Helios4   
    Armbian kernel configs (the ones that I have looked at, anyway) don't have the config options to build the ZFS modules. You will have to build a new kernel or build the module out of tree.
     
    Also, there is a lot of misunderstanding about ZFS RAM use. The main culprit of the seemingly huge RAM use of ZFS is the de-duplication feature. If you disable de-duplication then you can run a simple samba / nfs file server with 4-6TB of usable storage in about 650MB RAM. I have even run simple ZFS pools on an OPi Zero with 512MB RAM with de-dup disabled and a couple of config tweaks, using a 1TB SSD over USB for storage, though I would suggest that 750MB RAM is a more sensible minimum.
     
    The key is to disable de-dup BEFORE you write any data to the pools; disabling it afterwards won't lower RAM use. You can also limit the RAM used by setting zfs_arc_max to 25% of system RAM instead of the default of 50%, and disable the vdev cache with zfs_vdev_cache_size = 0. I have also found that setting Armbian's ZRAM swap to 25-30% of RAM instead of 50% of RAM improves performance. Obviously performance isn't going to be quite as good as a machine with 8GB RAM, but the difference isn't that big either.
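    The two module parameters above are normally set through a modprobe config file; here is a minimal sketch (the 25% figure follows the post, and /etc/modprobe.d/zfs.conf is the conventional location -- redirect the script's output there as root, then reload the zfs module or reboot):

```shell
#!/bin/sh
# Compute 25% of system RAM and emit the matching ZFS module options line.
set -eu
mem_kb=$(awk '/^MemTotal:/{print $2}' /proc/meminfo)
arc_max=$(( mem_kb * 1024 / 4 ))   # 25% of total RAM, in bytes
printf 'options zfs zfs_arc_max=%s zfs_vdev_cache_size=0\n' "$arc_max"
```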
  5. Like
    gprovost reacted to skat in Helios4: 3D printable bracket for 4 x 2.5" disks   
    Tnx :-)
    It took about 3 hours for the base and 1 hour for the top bracket; it depends on material and settings. I printed them with 0.35mm layer height and 20% infill. The black one is ABS, the red one PLA. I guess PLA will do as well for the base, as there is enough cooling inside (PLA has a lower melting point and I wanted to make sure the screws don't fall out because of some heat).
     
    Maybe it's best to install PrusaSlicer https://www.prusa3d.com/prusaslicer/ and load the STL files and play with the settings. Then slice and export the gcode, and the slicer will tell you all about the material needed and how long it will take to print.
  6. Like
    gprovost got a reaction from jimandroidpc in Helios4 Support   
    @SweetKick Regarding the issue of WoL not being enabled at each startup, there is effectively a timing issue that seems to be new... the eth0 interface is not yet ready when helios4-wol.service is triggered. I'm going to look at this and see how to fix it.
     
    Regarding the other issue where your system reboots 1-2 minutes after being put in suspend mode, my guess is that it's because you have installed OpenMediaVault. OMV installs the watchdog service by default. When the system is put in standby / suspend mode the watchdog is not stopped; since nothing will be kicking the watchdog in that state, it will reset the system after 60 sec.

    The workaround for now is to disable the watchdog service.
     
    sudo systemctl disable watchdog  
    Note: you will need to reboot after you disable the watchdog in order to ensure the watchdog is stopped.
     
    This is also something I was supposed to investigate, in order to see how to make systemd disable / enable the watchdog before / after going to suspend mode.
     
    Edit:
    Actually I forgot, but I did work with the OMV guys to fix the watchdog / suspend issue here: https://github.com/openmediavault/openmediavault/issues/343 But the fix only made it into OMV 5.
    You can manually add the tweak for OMV 4 by creating the files /lib/systemd/system-sleep/openmediavault and /usr/lib/pm-utils/sleep.d/openmediavault with the code you find in this commit: https://github.com/openmediavault/openmediavault/pull/345/commits/1c162cf69ed00beccdfe65501cb93f6fb1158df0
     
    Edit 2:
    I fixed the issue related to WoL not being enabled at startup. The issue was that, now that we rely completely on NetworkManager to configure the network interface, the unit file that enables WoL on eth0 no longer used the right target dependency and was called too early. Just edit /lib/systemd/system/helios4-wol.service,
     
    Replace :
    Requires=network.target
    After=network.target
     
    by :
    After=network-online.target
    Wants=network-online.target
     
    then do sudo systemctl daemon-reload and reboot the system
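    Put together, the [Unit] section of helios4-wol.service would end up looking roughly like this (only the After=/Wants= lines come from this post; the Description and [Service] entries shown are hypothetical placeholders -- keep whatever the shipped unit file already contains):

```ini
[Unit]
Description=Enable Wake-on-LAN on eth0 (placeholder)
After=network-online.target
Wants=network-online.target

[Service]
# Placeholder; the real unit ships its own ExecStart.
Type=oneshot
ExecStart=/usr/sbin/ethtool -s eth0 wol g
```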
     
  7. Like
    gprovost reacted to SweetKick in Helios4 Support   
    @gprovost I am running OMV 4 and the watchdog service was the issue.
    I reverted back to the default WoL configuration then adjusted the existing helios4-wol.service file with the "network-online.target" changes, I see the Wiki was updated as well.  Creating the two "openmediavault" files with the contents from your commit link and giving them execute permissions solved the issue.
     
    Thanks.
  8. Like
    gprovost got a reaction from SweetKick in Helios4 Support   
  9. Like
    gprovost got a reaction from lanefu in Helios4 Support   
  10. Like
    gprovost reacted to skat in Helios4: 3D printable bracket for 4 x 2.5" disks   
    Hi, I am using 4 x 2.5 inch disks in the standard Helios4 enclosure. Instead of laser cutting the smaller enclosure, I created a simple 3D printable bracket for it which I'd like to share.
     
    For those interested, it's here: https://www.katerkamp.de/maker/helios4/helios4-and-diskbracket.xhtml
     
    Including FreeCAD source and STL files.
     
    Have fun :-)
  11. Like
    gprovost reacted to pekkal in Helios4 Support   
    Thanks! Changing the display fixed the issue: tighten the screws with fingers only!
  12. Like
    gprovost reacted to prok in Helios4 Support   
    @gprovost Thanks for the response. I do see LED8 on and LED1 blinking, but I figured out that my issue was the login prompt wasn't printing even though the connection was successful. I'm assuming it's a problem with my instance of PuTTY, since I'm able to see the password prompt if I enter a username. Thanks again.
  13. Like
    gprovost reacted to sirleon in Helios4 Support   
    You should try to be more polite.
  14. Like
    gprovost reacted to MairusuPawa in Helios4 Support   
    I won't use the backplane. The main issue is that the RAID controller is located there and not on the original motherboard, and I can't interact with it (physically difficult, and, ironically enough, it's an Armada chip with very poor documentation preventing me from running FreeNAS on the thing in the first place). I'll just go with cables as-is for now, and if ever, I'll try to make a custom PCB with just passthrough Power / SATA.
     
    I do want to use the existing status LEDs though, at least. The rest of my plans would be to use the buttons as well (interfacing them with GPIO, but not sure what scenarios I want to have here), re-route the front USB, and maybe have another identical 1.3" OLED screen on the I²C bus to "fill" the Qnap front display cutout. No idea how feasible all of this is; it should all fit, but cable management is likely to be very tricky.
  15. Like
    gprovost reacted to AgentPete in Helios4 Support   
    Can I just say how delighted I am with my new Helios4? It’s a lovely piece of kit, and the online instructions are extremely well-written, too. Many congratulations to everyone concerned, a great job!
     
    I just have a minor query on the OLED screen, see attached pic.
     
    a) top line of the display is garbled, anything I can do about that?
     
    b) I don’t really need to display free space on the SD card, but would like to show the two volumes, /dev/dm-0 and /dev/dm-1. As you can see, dm-0 is not showing correctly, at the moment, showing total space as 997M when it should be 15.82TB. Current sys-oled.conf file is below.
     
    Again, thanks for your hard work!
     
    # Storage Device 1
    # Device name
    storage1_name = sd

    # Device mount path
    storage1_path = /

    # Storage Device 2
    storage2_name = dm-0
    storage2_path = /dev/dm-0
     
    https://www.dropbox.com/s/ie7xclyq8ip7hr4/2019-08-19 08.25.20.jpg?dl=0
  16. Like
    gprovost reacted to alchemist in Helios4 Support   
    Hi Thanks!
    I will take a try for this patch and tell you if it works for me.
     
    Really nice board, it's responsive and well documented
  17. Like
    gprovost reacted to bksoux in Helios4 Support   
    Just received and set up the Helios4! My first NAS.
    Everything's going fine for now. Drives are syncing.
     
     
     
  18. Like
    gprovost reacted to tekrantz in Helios4 Support   
    I had a meter.  I have not seen it lately.  If I can't find it I will go get one.  Thanks for the reply.  I apologize for my petulant tone.  There was NO need for that.
  19. Like
    gprovost reacted to MarcC in Helios4 Support   
    Just to say my batch 3 is working great !
    I just got the Helios4 to boot Archlinux (thanks Gontran!) directly from an SSD on sata1, thanks to @aprayoga's earlier tip in this thread about using the scsi command in U-Boot. That's nice, no more microSD card!
  20. Like
    gprovost got a reaction from Tido in Survey: Any reliable EspressoBin V7 around?   
    @Tido The power consumption for Helios4 is up-to-date on our wiki. I don't really consider that too much energy for a NAS board.
     
    @barish Just FYI, Helios4 is not a product from Solidrun, we are just using their System-on-Module.
  21. Like
    gprovost got a reaction from sirleon in Helios4 Support   
    @sirleon Effectively, based on your I2C scan, no device is detected on channel 1. I would still suspect wrong wiring.
     
    Can you post a picture of your I2C wiring on each side ?
     
    Have you tried another cable? In the kit there are 2 ribbon cables, a long one (30cm) and a short one (20cm). The short one is just a spare cable. Maybe try with the short one, just to be sure it's not a cable issue first.
     
    The second OLED display you tried, is it from another Helios4 kit or just another display you already had?
  22. Like
    gprovost got a reaction from lanefu in [RFC] New Naming Convention for Kernel Source Trees   
    I guess we will all have different opinions on the matter, but I feel we still forget to see it from the user's perspective.
     
    I don't really agree on using a Debian-like denomination for our use case. We are talking about kernels here, with a naming system to measure hardware support maturity. It's not because a kernel only supports 60% of the features of a board that it means it's Unstable. Same the other way around: it's not because a kernel version supports 80% of the features that it means it's Stable. Using the Debian convention will just confuse users because the meaning in reality will be pretty different.
     
    I agree that Next is the most confusing name for now.
     
    As for Default, I like the idea of renaming it Legacy, and loosening up the definition a bit... because in my case Helios4 has nothing to do with a vendor BSP.
     
    Again, let's take one concrete example: Helios4. The latest release (v5.91) is as follows:

    Default : U-Boot 2018.11 + LK Mainline 4.14.y (100% hardware support)
    Next : U-Boot Mainline 2019.04 + LK Mainline 4.19.y (100% hardware support)
    Dev : U-Boot Mainline 2019.04 + LK Mainline 5.x

    Let's put aside Dev, because personally I feel the meaning of Dev is pretty clear... and at the end of the day we don't publish Dev images. Again, let's look at it from the user's perspective: we aren't building images just for ourselves.

    If today I publish 2 images, Helios4 Default Buster and Helios4 Next Buster, what would drive a user to choose Default over Next?
    I mean, both have 100% hardware support, both are based on an LK mainline longterm version (actually LK 4.14 is much more longterm than 4.19 if you look at the LK calendar), and both use the latest Debian. You might say the choice is pretty trivial: the user should pick Next. But some users might ask themselves whether they shouldn't use Default, since it seems to be the default (or in other words, recommended) choice.
     
    Maybe it won't sound fancy, but I feel the following naming matches closest to what the builds really are, and it should be quite clear to users what is what and which one they should pick. (OK, maybe it applies better to the Helios4 use case than others.)
     
    - Legacy
    - Current
    - Dev
     
  23. Like
    gprovost reacted to jimandroidpc in Helios4 Support   
    @gprovost I manually made the change and confirmed the encryption. Now transfer speeds are 75-80MB/sec up/down compared to 50-55 on XTS. 
  24. Like
    gprovost got a reaction from lanefu in Helios4 - Cryptographic Engines And Security Accelerator (CESA) Benchmarking   
    https://wiki.kobol.io/cesa has been updated to be ready for the upcoming Debian Buster release. For Debian Buster we will use AF_ALG as the Crypto API userspace interface. We chose AF_ALG for Debian Buster because it doesn't require any patching or recompiling, but it comes with some limitations. While benchmarks show in some cases a throughput improvement with AF_ALG, the CPU load is not improved compared to cryptodev or 100% software encryption. This will require further investigation.
     
    So far I haven't succeeded in enabling cryptodev in the Debian version of libssl1.1.1... This needs further work, and it's not really our priority right now.
     
    Also, for Debian Stretch I updated the prebuilt libssl1.0.2 Debian deb to the latest security update release (libssl1.0.2_1.0.2s). Can be found here.
     
     
  25. Like
    gprovost got a reaction from lanefu in Helios4 Support   
    Armbian Buster is ready, but we are still working on upgrading the Linux kernel and U-Boot for the Default and Next Armbian branches, making the upcoming release a major Armbian release for Helios4. New OS images and Debian packages should be published in one to two weeks' time max.