gprovost

Members
  • Content Count: 236

Reputation Activity

  1. Like
    gprovost reacted to sfx2000 in Helios64 Announcement   
    16MB SPI NOR is a good choice, not just for U-Boot; one could put an entire operating system in there (OpenWrt, for example).
     
     
    I agree - @chwe - solder-down is going to offer benefits in cost and board space - 16GB is plenty of space considering the other connectivity.
     
    @gprovost - nice board...
  2. Like
    gprovost got a reaction from sfx2000 in Helios64 Announcement   
    Hi guys,
     
    I guess some might have heard that we (Kobol Team) were spinning up a new project to succeed Helios4. Here it is... Helios64
     
    [Helios64 board photos]
     
    We didn't have to look too far for the board name, since in essence the goal was to redesign Helios4 from scratch around a 64-bit SoC and try to improve every key feature that made Helios4 such a targeted board for NAS setups.
     
    Right now we are at the prototype testing phase; hopefully in about two months we will have a dozen eval boards to send around for evaluation and review... and if all goes well the first batch should be available for order in Feb / March 2020.
     
    Happy to answer any questions :-)
     
     
  3. Like
    gprovost reacted to DBwpg in Helios64 Announcement   
    Great timing. I just started doing some research to replace my 4-bay QNAP NAS that will reach EOL for software support by the end of next year. I have been looking at various SBCs but couldn't find one that supported 4-5 SATA drives other than the NanoPi SATA HAT configuration, which I wasn't too keen on. The Helios64 sure looks like it will do the job just nicely. The kit build will especially nail it for me.😁
  4. Like
    gprovost reacted to lanefu in Helios4 Support   
    Hi @gprovost, I received my new power supply and all my drives are running again. That resolved the issue. Thanks for your help!
  5. Like
    gprovost got a reaction from NicoD in Helios64 Announcement   
  6. Like
    gprovost got a reaction from chwe in Helios64 Announcement   
    Yeah, I should have mentioned that also. Through the same USB Type-C port we also support flashing over USB with the Rockchip dev/flash tools. The recovery button is one of 3 push buttons on the board. Each on-board memory (SPI and eMMC) can be disabled with a jumper (no need to have 16 fingers to flash the board).
     
    We will try our best. We will also sell a simple adapter kit so people can use Helios64 with any mini-ITX or ATX PSU. This way people can recycle their old case and save money.
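     
    The post doesn't name the exact commands, but a typical flash-over-USB session with Rockchip's rkdeveloptool looks roughly like this (a sketch; the loader and image file names are placeholders, not actual Kobol files):
     
    # Hold the recovery button while powering on to enter loader/maskrom mode,
    # then over the USB Type-C cable:
    rkdeveloptool ld                           # list detected Rockchip devices
    rkdeveloptool db rk3399_loader.bin         # push the USB boot loader (placeholder name)
    rkdeveloptool wl 0 Armbian_Helios64.img    # write the OS image starting at sector 0
    rkdeveloptool rd                           # reset the board to boot the new image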
  7. Like
    gprovost got a reaction from chwe in Helios64 Announcement   
    We will offer an ECC option. The Rockchip RK3399 SoC itself doesn't have a memory controller with ECC support; however, we are currently working with an SDRAM vendor that now offers ECC built into the SDRAM directly. It's not impossible it will be available on day one.
     
    No. It is not a use case that makes sense from our point of view. I don't believe there will be a lot of people out there with a USB Type-C power adapter that can deliver up to 100W. Plus the power circuitry (and the PSU) would be quite expensive.
     
    No. It's a pure M.2 SATA port. FYI, it's shared with SATA port 1, so the combination is either (1x M.2 SSD + 4x HDD/SSD) or 5x HDD/SSD.
     
    Yes, there is a 128Mb (16MB) SPI NOR flash.
     
    Something we didn't disclose because there is already too much info on the picture: the USB Type-C port has been designed with a 4-in-1 mode concept:
    - DisplayPort
    - DAS mode
    - Host mode
    - Serial console access
    We will explain more at a later stage... but it's quite a unique design we did.
     
    We wanted to make the eMMC a basic feature of our board. I mean, what's the point of selling it as a module when you know that in 90% of cases the user should use eMMC instead of an SD card? It also helps in terms of software support to know that all Helios64 users will have the same config, so we can focus on an installation guide that uses the eMMC.
     
    You guessed right: this is not just some bare-bones carrier board with a SoC in the middle. Between the PCIe-to-SATA bridge, the 2.5GbE interface, the built-in UPS feature, RAM, eMMC, etc... it adds up quickly. We also paid a lot of attention to the different power rails; this is a premium design compared to the other RK3399 boards out there.
    Helios4 was USD 200 with the full kit (PSU, case, cables, fan, etc...); our goal is to try to be in the same ballpark. This time we will sell à-la-carte style: you can buy just the board alone, or with the case, and if you already have a PSU then there is no need to order one...
     
  8. Like
    gprovost got a reaction from jimandroidpc in Helios64 Announcement   
  9. Like
    gprovost got a reaction from abreyu in Helios64 Announcement   
  10. Like
    gprovost got a reaction from Igor in Helios64 Announcement   
  11. Like
    gprovost got a reaction from chwe in Helios64 Announcement   
  12. Like
    gprovost reacted to aprayoga in Helios4 Support   
    @jimbolaya, I know it's quite late; the ch341 driver will be included in the next release. Refer to commit 14a0a54.
     
    @smith69085, we have updated the wiki page on how to use GPIO for a button. You can take a look at
    https://wiki.kobol.io/gpio/#use-gpio-with-device-tree-overlay
     
    But currently the base dtb (armada-388-helios4.dtb) on Armbian 5.91 is still not compiled with overlay support. You can download the attached dtb, rename it, and replace the dtb on your system.
     
    lk4.14_armada-388-helios4.dtb is for Armbian Default (Stretch, Linux kernel 4.14):
    sudo cp lk4.14_armada-388-helios4.dtb /boot/dtb/armada-388-helios4.dtb
     
    lk4.19_armada-388-helios4.dtb is for Armbian Next (Buster, Linux kernel 4.19):
    sudo cp lk4.19_armada-388-helios4.dtb /boot/dtb/armada-388-helios4.dtb
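     
    Not part of the original instructions, but a prudent extra step is to back up the stock dtb first, so you can roll back if the board fails to boot:
     
    # keep a copy of the original dtb before overwriting it
    sudo cp /boot/dtb/armada-388-helios4.dtb /boot/dtb/armada-388-helios4.dtb.bak
     
    Then copy the downloaded dtb as shown above and reboot for it to take effect.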
  13. Like
    gprovost reacted to tuxd3v in Helios4 Support   
    I am starting to think about adding one more :)
  14. Like
    gprovost got a reaction from lanefu in Helios4 Support   
    @lanefu OK, please send me a PM to arrange a replacement PSU.
  15. Like
    gprovost reacted to Jeckyll in Helios4 ECC - CPU freq info - air flow   
    Hello,
     
    No, I don't think so; I know the sound of "fan eats cable", and this is more like something vibrating. However, yesterday I put a piece of silicone under the Helios, and since then everything is fine.
    The fans are spinning at the expected noise level. I still suspect the case is not tight enough and starts vibrating.
    My Helios stands on a wooden floor; maybe that's not a good combination.
     
    Independently of my experience, maybe you could add a silicone ring for the HDD screws, between HDD and case, in the next batch.
     
    Thanks for your support, I'm fine and happy now.
  16. Like
    gprovost reacted to Bramani in Helios4 ECC - CPU freq info - air flow   
    Apologies again, I started this thread out of eagerness and curiosity, not to create confusion. That said, the fans should be relatively quiet once fancontrol has kicked in after boot, but that depends very much on your environment. You are right though, Batch 3 fans cannot be shut off (see the wiki). I am not sure whether I understand your second question correctly: are you considering reversing the fans (and the air flow)? Please don't. As gprovost wrote, the default direction should be optimal for most use cases. Keeping the HDDs at safe operating temperatures is what really matters in a NAS.
     
    The overview is just a few shell functions I whipped up for comparison of the two units. They have several flaws. For instance, I have used sd[a|b|c|d] names instead of proper UUIDs. The label should read "PWM", not "RPM" (the functions below print the raw pwm1 value, not a real RPM). And I am not sure if the temperature representation is correct. But nevertheless, here they are. Note that getCPUFreq and getCPUStats currently do not work on Buster, only on Stretch.
     
    Add these to your non-privileged user's .bashrc and reload with "source .bashrc" or log in again. After that, simply enter "getSysStatus" on the command line to print the overview.
    # Print current CPU frequency
    getCPUFreq() {
        local i freq
        for i in 0 1; do
            freq=$(cat /sys/devices/system/cpu/cpu$i/cpufreq/cpuinfo_cur_freq)
            printf "%s\n" "CPU $i freq.: ${freq%???} MHz"
        done
    }

    # Print CPU statistics
    getCPUStats() {
        local i stats
        for i in 0 1; do
            # This works, but it needs three expensive syscalls to external commands
            #stats="$(cpufreq-info -c $i | grep 'stats' | sed 's/,/;/;s/:/: /g;s/\./,/g')"
            # Same, but reduced by one
            stats="$(cpufreq-info -c $i | \
                awk '/stats/{gsub(/,/,";");gsub(/:/,": ");gsub(/\./,",");print}')"
            # Cut front and end from string; this could be done in awk, too, but the
            # resulting expression would be long and hard to decipher for non-awk users.
            # Using shell internal string functions should not be that expensive, either.
            stats="${stats#*: }"
            stats="${stats% *}"
            # Finally, print the resulting string, nicely formatted
            printf "%s\n" "CPU $i stats: ${stats}"
        done
    }

    # Print system fan speeds (raw PWM values)
    getFanSpeed() {
        local i j=3 speed
        for i in 10 17; do
            speed=$(cat /sys/devices/platform/j$i-pwm/hwmon/hwmon$j/pwm1)
            printf "%s\n" "Fan J$i PWM: ${speed}"
            ((j++))
        done
    }

    # Print SoC temperature
    getSoCTemp() {
        local temp=$(cat /sys/devices/virtual/thermal/thermal_zone0/temp)
        printf "%s\n" "SoC core temp.: ${temp%???},${temp: -3}"
    }

    # Print ambient temperature
    getAmbientTemp() {
        local temp=$(cat /dev/thermal-board/temp1_input)
        printf "%s\n" "Ambient temp.: ${temp%???},${temp: -3}"
    }

    # Print temperature of all HDDs
    getDriveTemps() {
        local i temp
        for i in /dev/sd[abcd]; do
            temp=$(sudo /usr/sbin/smartctl -a $i | awk '/^194/{print $10}')
            printf "%s\n" "$i temp.: ${temp}"
        done
    }

    # Print current power mode status of all HDDs
    getDriveStates() {
        local i state
        for i in /dev/sd[abcd]; do
            state="$(sudo /sbin/hdparm -C $i)"
            printf "%s\n" "$i state: ${state##* }"
        done
    }

    # Print system status
    getSysStatus() {
        # getCPUStats and getCPUFreq work on Stretch only, hence disabled here
        # printf "\n"
        # getCPUStats
        # printf "\n"
        # getCPUFreq
        printf "\n"
        getFanSpeed
        printf "\n"
        getSoCTemp
        getAmbientTemp
        printf "\n"
        getDriveTemps
        printf "\n"
        getDriveStates
    }
  17. Like
    gprovost got a reaction from uiop in Helios4 Support   
    @uiop The 32-bit architecture limitation is actually in the Linux page cache, which is used by file systems, limiting the maximum usable size of a partition or logical volume to 16TB. This doesn't stop you from having several partitions (or logical volumes) of <16TB each.
     
    Here is an example considering 4x 12TB HDDs:
     
    If you use mdadm and set up a RAID6 or RAID10 array, you will have an array with 24TB of usable space.
    You can then create 2x partitions of 12TB, or any other combination that maxes out the array size, as long as each partition doesn't exceed 16TB.
     
    If you use LVM and set up a Volume Group (VG) with the 4 drives (PVs), you will get a volume group with 48TB of usable space.
    You can then create as many Logical Volumes (LVs) as you like (max: 255) to max out the VG size, as long as each LV doesn't exceed 16TB.
     
    Actually, a good approach is to use LVM on top of mdadm RAID; this gives you the flexibility to resize your partitions, or I should say Logical Volumes (see the sketch below).
     
    You can of course use neither mdadm nor LVM and just set up each disk individually... as long as you follow the rule that you can't create a single partition bigger than 16TB (you can still create several partitions per disk).
     
    Hope it clarifies.
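     
    A minimal sketch of the LVM-on-mdadm layout described above, assuming 4x 12TB drives at /dev/sda through /dev/sdd and RAID6; the array, VG and LV names are illustrative only:
     
    # RAID6 over the 4 drives -> 24TB usable
    sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # LVM on top of the array
    sudo pvcreate /dev/md0
    sudo vgcreate vg_nas /dev/md0

    # Two LVs, each below the 16TB page-cache limit
    sudo lvcreate -L 12T -n lv_data1 vg_nas
    sudo lvcreate -L 12T -n lv_data2 vg_nas

    # Format and mount as usual; LVs can later be grown with lvresize + resize2fs
    sudo mkfs.ext4 /dev/vg_nas/lv_data1
    sudo mkfs.ext4 /dev/vg_nas/lv_data2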
     
  18. Like
    gprovost reacted to qstaq in ZFS on Helios4   
    There are license incompatibility problems and patent risks with ZFS on Linux. ZFS is CDDL-licensed, which risks potential patent lawsuits for anyone publishing ZFS code contributions under a GPL license. That's why ZFS will not be in any mainline kernel any time soon. The CDDL license was seemingly designed by Sun to be incompatible with the GPL. That doesn't mean you can't use ZFS on Linux; you just have to use the CDDL-licensed ZFS code to be granted Oracle's patent protection / immunity. Debian, Ubuntu, etc. get round this by shipping the ZFS module as an out-of-tree module built via DKMS, not as part of the kernel.
     
    I would suggest that there are currently too many problems with ZFS, both from a practical and a legal viewpoint, to think about making it a core part of Armbian. Most of Armbian's target hardware is under-specified for a default setup of ZFS, and proper storage management under ZFS requires much more knowledge than mdraid with ext4 or BTRFS. It's not complex; it just requires you to think and plan more at the deployment stage and to be aware of some storage concepts that most users don't know about.
     
    Now on to the positive news. ZFS is an excellent filesystem, and for certain use cases it's way more powerful and flexible than BTRFS or any lvm / mdraid / ext4 combination; it's also pretty simple to administer once you learn the basic concepts. I'm sure there is a small portion of Armbian users who would get significant benefit from having ZFS available as a storage option. Even better news: fundamentally, every Armbian system already supports ZFS at a basic level, as both Debian and Ubuntu have the zfs-dkms package available in their repos. It takes less than 5 minutes to enable ZFS support on any Armbian image just by installing the following packages with apt: zfs-dkms, zfs-initramfs & zfsutils-linux (probably zfsnap is also wanted, but not required); see the example below. The problem is that the default config will definitely not be suitable for a 1-2GB RAM SBC, and we would need a more appropriate default config. I still wouldn't recommend ZFS for the rootfs; if you want snapshots on your rootfs, use BTRFS or ext4 with timeshift.
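     
    As a sketch, that 5-minute install comes down to the following (on Debian-based images the contrib component may need to be enabled first; zfsnap is optional):
     
    sudo apt update
    sudo apt install zfs-dkms zfs-initramfs zfsutils-linux
    sudo apt install zfsnap                  # optional snapshot helper
    sudo modprobe zfs && lsmod | grep zfs    # verify the DKMS module built and loads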
     
    @Igor @gprovost I would think the best place to implement this would be as an extra option in armbian-config -> software. I wrote a simple script to install ZFS support and set some default options for compression, ARC, cache, etc. for low-mem / low-CPU devices. The same script works without changes on Debian Buster and Ubuntu 16.04, 18.04 & 19.04, as the process is identical on all OS variants as far as I can discern.
     
    Is https://github.com/armbian/config the live location for the armbian-config utility? If so, I will have a look at adding a basic ZFS option in software.
     
     
  19. Like
    gprovost reacted to qstaq in ZFS on Helios4   
    Armbian kernel configs (the ones that I have looked at, anyway) don't have the config options to build the ZFS modules. You will have to build a new kernel or build the module out of tree.
     
    Also, there is a lot of misunderstanding about ZFS RAM use. The main culprit of the seemingly huge RAM use of ZFS is the de-duplication feature. If you disable de-duplication, you can run a simple Samba / NFS file server with 4-6TB of usable storage in about 650MB of RAM. I have even run simple ZFS pools on an OPi Zero with 512MB RAM, with de-dup disabled and a couple of config tweaks, using a 1TB SSD over USB for storage, though I would suggest that 750MB RAM is a more sensible minimum.
     
    The key is to disable de-dup BEFORE you write any data to the pools; disabling it afterwards won't lower RAM use. You can also limit the RAM used by setting zfs_arc_max to 25% of system RAM instead of the default of 50%, and disable the vdev cache with zfs_vdev_cache_size = 0. I have also found that setting Armbian's ZRAM swap to 25-30% of RAM instead of 50% improves performance. Obviously performance isn't going to be quite as good as on a machine with 8GB RAM, but the difference isn't that big either.
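     
    Those two tweaks can be made persistent as ZFS module options. A sketch assuming a 2GB board, so 25% of RAM is 512MB (adjust zfs_arc_max, in bytes, to your system):
     
    # /etc/modprobe.d/zfs.conf
    # Cap the ARC at 512MB (value in bytes)
    options zfs zfs_arc_max=536870912
    # Disable the per-vdev read-ahead cache
    options zfs zfs_vdev_cache_size=0
     
    And remember to disable dedup per pool before writing any data, e.g. sudo zfs set dedup=off tank (pool name "tank" is just an example).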
  20. Like
    gprovost reacted to skat in Helios4: 3D printable bracket for 4 x 2.5" disks   
    Tnx :-)
    It took about 3 hours for the base and 1 hour for the top bracket; it depends on material and settings. I printed them with 0.35mm layer height and 20% infill. The black one is ABS, the red one PLA. I guess PLA will do as well for the base, as there is enough cooling inside (PLA has a lower melting point and I wanted to make sure the screws don't fall out because of heat).
     
    Maybe it's best to install PrusaSlicer https://www.prusa3d.com/prusaslicer/ and load the STL files and play with the settings. Then slice and export the gcode, and the slicer will tell you all about the material needed and how long it will take to print.
  21. Like
    gprovost got a reaction from jimandroidpc in Helios4 Support   
    @SweetKick Regarding the issue of WoL not being enabled at each startup, there is indeed a timing issue that seems to be new... the eth0 interface is not yet ready when helios4-wol.service is triggered. I'm going to look at this and see how to fix it.
     
    Regarding the other issue, where your system reboots 1-2 minutes after being put in suspend mode: my guess is that it's because you have installed OpenMediaVault. OMV installs the watchdog service by default. When the system is put in standby / suspend mode the watchdog is not stopped, so since nothing will be kicking the watchdog in that state, the watchdog will reset the system after 60 seconds.

    The workaround for now is to disable the watchdog service:
     
    sudo systemctl disable watchdog
     
    Note: you will need to reboot after disabling the watchdog in order to ensure it is stopped.
     
    This is also something I was supposed to investigate: how to make systemd disable / enable the watchdog before / after going to suspend mode.
     
    Edit:
    Actually I forgot, but I did work with the OMV guys to fix the watchdog / suspend issue here: https://github.com/openmediavault/openmediavault/issues/343 But the fix only made it into OMV5.
    You can manually add the tweak for OMV4 by creating the files /lib/systemd/system-sleep/openmediavault and /usr/lib/pm-utils/sleep.d/openmediavault with the code you find in this commit: https://github.com/openmediavault/openmediavault/pull/345/commits/1c162cf69ed00beccdfe65501cb93f6fb1158df0
     
    Edit 2:
    I fixed the issue related to WoL not being enabled at startup. The issue is that, now that we rely completely on NetworkManager to configure the network interface, the unit file that enables WoL on eth0 no longer used the right target dependency and was called too early. Just edit /lib/systemd/system/helios4-wol.service.
     
    Replace:
    Requires=network.target
    After=network.target
     
    with:
    After=network-online.target
    Wants=network-online.target
     
    then do sudo systemctl daemon-reload and reboot the system.
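     
    For reference, a sketch of what the full unit file could look like after the edit; the Description and ExecStart lines are assumptions (WoL is typically enabled with ethtool), not a quote of the actual helios4-wol.service:
     
    # /lib/systemd/system/helios4-wol.service (sketch)
    [Unit]
    Description=Enable Wake-on-LAN on eth0
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=oneshot
    # Assumed command: enable magic-packet WoL on eth0
    ExecStart=/sbin/ethtool -s eth0 wol g

    [Install]
    WantedBy=multi-user.target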
     
  22. Like
    gprovost reacted to SweetKick in Helios4 Support   
    @gprovost I am running OMV 4 and the watchdog service was the issue.
    I reverted to the default WoL configuration, then adjusted the existing helios4-wol.service file with the "network-online.target" changes; I see the wiki was updated as well. Creating the two "openmediavault" files with the contents from your commit link and giving them execute permissions solved the issue.
     
    Thanks.
  23. Like
    gprovost got a reaction from SweetKick in Helios4 Support   
  24. Like
    gprovost got a reaction from lanefu in Helios4 Support   
  25. Like
    gprovost reacted to skat in Helios4: 3D printable bracket for 4 x 2.5" disks   
    Hi, I am using 4x 2.5-inch disks in the standard Helios4 enclosure. Instead of laser-cutting the smaller enclosure, I created a simple 3D-printable bracket for it which I'd like to share.
     
    For those interested, it's here: https://www.katerkamp.de/maker/helios4/helios4-and-diskbracket.xhtml
     
    Including FreeCAD source and STL files.
     
    Have fun :-)