ZFS on Helios4


SvenHz



Hi all, I am trying to get ZFS up and running on my new Helios4. My apologies if I am overlooking something trivial. Is my assumption correct that I need a different image?

 

Steps taken:

 

# apt-get install zfsutils-linux

 

At the final stage of installation (configuration), the error output is:

 

The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.
zfs-share.service: Main process exited, code=exited, status=1/FAILURE
zfs-share.service: Failed with result 'exit-code'.
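
For anyone hitting the same message, a quick way to confirm that the stock kernel simply doesn't ship a ZFS module (a generic check, assuming the usual /boot/config-* layout):

# look for a zfs module built for the running kernel
find /lib/modules/$(uname -r) -name 'zfs.ko*'
# check whether the kernel config enables ZFS (on Armbian it typically doesn't)
grep -i zfs /boot/config-$(uname -r)

If both come back empty, the module has to be built separately.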

 

Image details:

 

Armbian image provided for the Helios4: Debian 10 (Buster), 02/08/2019

https://cdn.kobol.io/files/Armbian_5.91_Helios4_Debian_buster_next_4.19.63.7z

 


Armbian kernel configs (the ones that I have looked at, anyway) don't have the config options to build the ZFS modules. You will have to build a new kernel or build the module out of tree.

 

Also, there is a lot of misunderstanding about ZFS RAM use. The main culprit behind the seemingly huge RAM use of ZFS is the de-duplication feature. If you disable de-duplication, you can run a simple Samba / NFS file server with 4-6TB of usable storage in about 650MB of RAM. I have even run simple ZFS pools on an OPi Zero with 512MB RAM, with de-dup disabled and a couple of config tweaks, using a 1TB SSD over USB for storage, though I would suggest that 750MB RAM is a more sensible minimum.

 

The key is to disable de-dup BEFORE you write any data to the pools; disabling it afterwards won't lower RAM use. You can also limit the RAM used by setting zfs_arc_max to 25% of system RAM instead of the default of 50%, and disable the vdev cache with zfs_vdev_cache_size = 0. I have also found that setting Armbian's ZRAM swap to 25-30% of RAM instead of 50% improves performance. Obviously performance isn't going to be quite as good as on a machine with 8GB RAM, but the difference isn't that big either.
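
For example, on a 2GB board those two module settings would look like this (a sketch; 25% of 2GB is 512MB, and the values are in bytes):

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=536870912
options zfs zfs_vdev_cache_size=0

And dedup is switched off per pool, before any data is written ("tank" is a placeholder pool name):

zfs set dedup=off tank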


8 hours ago, qstaq said:

Armbian kernel configs (the ones that I have looked at, anyway) don't have the config options to build the ZFS modules. You will have to build a new kernel or build the module out of tree.


If you can benefit from this, send a PR with the changes and we can rebuild the kernel ... @gprovost?


As recommended by @qstaq, it needs to be built out of tree (DKMS)... it's too complicated to add to the Armbian build routine, since it's not just a matter of toggling an option in the kernel config.

 

@SvenHz It's actually pretty straightforward, since there is a Debian package that automatically takes care of building ZFS as a DKMS module.

 

sudo apt-get install linux-headers-next-mvebu zfs-dkms zfsutils-linux

 

You can refer to the ZFSonLinux Debian install instructions here: https://github.com/zfsonlinux/zfs/wiki/Debian if you want to force APT to use a more recent version of ZFS from the Debian backports.
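
If backports aren't enabled yet, that part is a one-liner (a sketch; any Debian mirror will do, and contrib is needed for the ZFS packages):

echo "deb http://deb.debian.org/debian buster-backports main contrib" > /etc/apt/sources.list.d/buster-backports.list
apt-get update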

 

As pointed out by @qstaq, you must disable the de-duplication feature, which is the source of the misunderstood rule of thumb of 1GB of RAM per 1TB of storage. And follow his other useful tips.

 

We have a couple of people who reported good feedback and decent performance with ZoL on Helios4. Most of them were using mirror vdevs instead of raidz mode, though. You can find plenty of discussion on this topic (mirror vdev vs raidz), which is an important factor you must consider if you want decent performance on Helios4 with ZoL.
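
For illustration, the two layouts for four disks look like this (hypothetical device names; on a real pool you would normally use /dev/disk/by-id paths):

# two striped mirror vdevs: better random I/O, 50% usable capacity
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# one raidz1 vdev: ~75% usable capacity, but heavier parity work for a 32-bit CPU
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd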

 

 

 

 

 


Thank you all for chiming in. And yes my requirements are modest: 2x 3TB in mirror, no dedup.

 

I got a bit further, but the DKMS part is failing. Any suggestions? EDIT: it looks like this is a common error; I found some info elsewhere and am still working on this...




# apt-get install linux-headers-next-mvebu zfs-dkms zfsutils-linux
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  dkms libelf-dev libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms
Suggested packages:
  python3-apport menu spl nfs-kernel-server samba-common-bin zfs-initramfs | zfs-dracut
Recommended packages:
  fakeroot linux-headers-686-pae | linux-headers-amd64 | linux-headers-generic | linux-headers zfs-zed
The following NEW packages will be installed:
  dkms libelf-dev libnvpair1linux libuutil1linux libzfs2linux libzpool2linux linux-headers-next-mvebu spl-dkms zfs-dkms zfsutils-linux
0 upgraded, 10 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/13.6 MB of archives.
After this operation, 91.4 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Preconfiguring packages ...
Selecting previously unselected package dkms.
(Reading database ... 33082 files and directories currently installed.)
Preparing to unpack .../archives/dkms_2.6.1-4_all.deb ...
Unpacking dkms (2.6.1-4) ...
Selecting previously unselected package libelf-dev:armhf.
Preparing to unpack .../libelf-dev_0.176-1.1_armhf.deb ...
Unpacking libelf-dev:armhf (0.176-1.1) ...
Selecting previously unselected package spl-dkms.
Preparing to unpack .../spl-dkms_0.7.12-2_all.deb ...
Unpacking spl-dkms (0.7.12-2) ...
Setting up dkms (2.6.1-4) ...
Setting up libelf-dev:armhf (0.176-1.1) ...
Setting up spl-dkms (0.7.12-2) ...
Loading new spl-0.7.12 DKMS files...
Building for 4.19.63-mvebu
Module build for kernel 4.19.63-mvebu was skipped since the
kernel headers for this kernel does not seem to be installed.
Selecting previously unselected package zfs-dkms.
(Reading database ... 33440 files and directories currently installed.)
Preparing to unpack .../0-zfs-dkms_0.7.12-2+deb10u1_all.deb ...
Unpacking zfs-dkms (0.7.12-2+deb10u1) ...
Selecting previously unselected package libuutil1linux.
Preparing to unpack .../1-libuutil1linux_0.7.12-2+deb10u1_armhf.deb ...
Unpacking libuutil1linux (0.7.12-2+deb10u1) ...
Selecting previously unselected package libnvpair1linux.
Preparing to unpack .../2-libnvpair1linux_0.7.12-2+deb10u1_armhf.deb ...
Unpacking libnvpair1linux (0.7.12-2+deb10u1) ...
Selecting previously unselected package libzpool2linux.
Preparing to unpack .../3-libzpool2linux_0.7.12-2+deb10u1_armhf.deb ...
Unpacking libzpool2linux (0.7.12-2+deb10u1) ...
Selecting previously unselected package libzfs2linux.
Preparing to unpack .../4-libzfs2linux_0.7.12-2+deb10u1_armhf.deb ...
Unpacking libzfs2linux (0.7.12-2+deb10u1) ...
Selecting previously unselected package linux-headers-next-mvebu.
Preparing to unpack .../5-linux-headers-next-mvebu_5.91_armhf.deb ...
Unpacking linux-headers-next-mvebu (5.91) ...
Selecting previously unselected package zfsutils-linux.
Preparing to unpack .../6-zfsutils-linux_0.7.12-2+deb10u1_armhf.deb ...
Unpacking zfsutils-linux (0.7.12-2+deb10u1) ...
Setting up linux-headers-next-mvebu (5.91) ...
Compiling headers - please wait ...
Setting up libuutil1linux (0.7.12-2+deb10u1) ...
Setting up zfs-dkms (0.7.12-2+deb10u1) ...
WARNING: Building ZFS module on a 32-bit kernel.
Loading new zfs-0.7.12 DKMS files...
Building for 4.19.63-mvebu
Building initial module for 4.19.63-mvebu
configure: error:
        *** Please make sure the kmod spl devel <kernel> package for your
        *** distribution is installed then try again.  If that fails you
        *** can specify the location of the spl objects with the
        *** '--with-spl-obj=PATH' option.  Failed to find spl_config.h in
        *** any of the following:
        /usr/src/spl-0.7.12/4.19.63-mvebu
        /usr/src/spl-0.7.12
Error! Bad return status for module build on kernel: 4.19.63-mvebu (armv7l)
Consult /var/lib/dkms/zfs/0.7.12/build/make.log for more information.
dpkg: error processing package zfs-dkms (--configure):
 installed zfs-dkms package post-installation script subprocess returned error exit status 10


 

 


There are license incompatibility problems and patent risks with ZFS on Linux. ZFS is CDDL-licensed, which risks potential patent lawsuits for anyone publishing ZFS code contributions under a GPL license. That's why ZFS will not be in any mainline kernel any time soon. The CDDL license was seemingly designed by Sun to be incompatible with the GPL. That doesn't mean you can't use ZFS on Linux; you just have to use the CDDL-licensed ZFS code to be granted Oracle's patent protection / immunity. Debian, Ubuntu, etc. get round this by shipping the ZFS module as a DKMS-built out-of-tree module and not as part of the kernel.

 

I would suggest that there are currently too many problems with ZFS, from both a practical and a legal viewpoint, to think about making ZFS a core part of Armbian. Most of Armbian's target hardware is under-specified for a default setup of ZFS, and proper storage management under ZFS requires much more knowledge than mdraid with ext4, or BTRFS. It's not complex; it just requires you to think and plan more at the deployment stage and to be aware of some storage concepts that most users don't have knowledge of.

 

Now on to the positive news :) ZFS is an excellent filesystem, and for certain use cases it's way more powerful and flexible than BTRFS or any lvm / mdraid / ext4 combination; it's also pretty simple to admin once you learn the basic concepts. I'm sure there is a small portion of Armbian users who would get significant benefit from having ZFS available as a storage option. Even better news is that fundamentally every Armbian system already supports ZFS at a basic level, as both Debian and Ubuntu have the zfs-dkms package available in the repos. It takes less than 5 minutes to enable ZFS support on any Armbian image just by installing the following packages with apt: zfs-dkms, zfs-initramfs & zfsutils-linux (zfsnap is probably also wanted, but not required). The problem is that the default config will definitely not be suitable for a 1-2GB RAM SBC, and we would need to create a more appropriate default config. I still wouldn't recommend ZFS for the rootfs; if you want snapshots on your rootfs, use BTRFS or ext4 with timeshift.
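
In practice that boils down to one command (package names as they appear in the Debian/Ubuntu repos):

apt-get install zfs-dkms zfs-initramfs zfsutils-linux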

 

@Igor @gprovost I would think that the best place to implement this would be as an extra option in armbian-config -> software. I wrote a simple script to install ZFS support and to set some default options for compression, arc, cache, etc. for low-mem / low-CPU devices. The same script works without changes on Debian Buster and Ubuntu 16.04, 18.04 & 19.04, as the process is identical on all OS variants as far as I can discern.
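
Not the actual script, but a minimal sketch of the kind of defaults such a script might apply on a low-memory board ("tank" is a placeholder pool name):

#!/bin/sh
# illustrative defaults for a 1-2GB SBC, not qstaq's script
apt-get install -y zfs-dkms zfsutils-linux
# cheap compression is usually a net win on weak CPUs; atime updates are not
zfs set compression=lz4 tank
zfs set atime=off tank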

 

Is https://github.com/armbian/config the live location for the armbian-config utility? If so, I will have a look at adding a basic ZFS option under software.

 

 


I am still stuck with spl-dkms. I am now on backports.

#apt-get install -t buster-backports spl-dkms
Loading new spl-0.7.13 DKMS files...
Building for 4.19.63-mvebu
Building initial module for 4.19.63-mvebu
configure: error: *** Unable to build an empty module.
Error! Bad return status for module build on kernel: 4.19.63-mvebu (armv7l)
Consult /var/lib/dkms/spl/0.7.13/build/make.log for more information.
dpkg: error processing package spl-dkms (--configure):
 installed spl-dkms package post-installation script subprocess returned error exit status 10
Errors were encountered while processing:
 spl-dkms
E: Sub-process /usr/bin/dpkg returned an error code (1)

Interestingly, at that point I can successfully do a manual "dkms build spl/0.7.13" and "dkms install spl/0.7.13"...!! However, if I then do "apt-get install zfs-dkms", it decides to de-install spl-dkms, reinstall it, and fail with the above error :-)

 

Any ideas at this point? :-)


6 minutes ago, SvenHz said:

I am still stuck with spl-dkms. I am now on backports.

Interestingly, at that point, I can successfully do a manual "dkms build spl/0.7.13" and "dkms install spl/0.7.13"...!! However if I then do "apt-get install zfs-dkms", it decides to de-install spl-dkms, reinstall it, and fail with the above error :-)

 

Any ideas at this point? :-)

I'm having the same problem on Buster right now, but older versions of zfs-dkms on Ubuntu have no problems. I will have a proper look this evening when I don't have real work interrupting me.


@SvenHz First, you should purge any ZFS packages so you can try from scratch again.

 

Then, if you want to install from backports, you need to configure APT to use backports for all the ZFS dependencies.

 

Create the file /etc/apt/preferences.d/90_zfs with the following content:

Package: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms zfs-dkms zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed
Pin: release n=buster-backports
Pin-Priority: 990

Then redo the install; you won't need to specify "-t buster-backports" in your apt command.

 

I also tested on Buster yesterday and it works without issue.
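
You can also verify that the pin took effect before installing:

apt-cache policy zfs-dkms spl-dkms
# the candidate versions should now be the ~bpo10 backports builds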


Even with a clean image (downloaded from kobol.io, dated August 2nd) I still get the error in spl-dkms.

 

Steps to reproduce:

 

1. Start with the image
2. # apt-get update
3. # apt-get upgrade
4. Edit 90_zfs for backports
5. # apt-get install linux-headers-next-mvebu
6. # apt-get install spl-dkms -> same error
 

# more  /var/lib/dkms/spl/0.7.13/build/make.log
DKMS make.log for spl-0.7.13 for kernel 4.19.63-mvebu (armv7l)
Tue Sep 10 14:52:48 UTC 2019
make: *** No targets specified and no makefile found.  Stop.

#
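
"No makefile found" usually means DKMS could not find a usable kernel build directory. A quick sanity check (assuming the standard Armbian header layout):

ls -ld /lib/modules/$(uname -r)/build
ls /usr/src/linux-headers-$(uname -r)/Makefile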


@SvenHz @gprovost There seems to be a problem with kernels newer than 4.14. I started investigating last night and tried a lot of different kernel / zfs-dkms versions. All my live deployments of ZFS are on rk3328 and rk3399 devices with kernel 4.4, and with 4.4 everything seems to work fine. I have not managed a successful, apt-automated build on a kernel newer than 4.14. It's not an ARM-specific issue; even on x86_64 I'm having problems with more recent kernels. I'm going to spend a couple of hours on it tonight and see if I can narrow down a cause.


This is the log of a ZFS install, setup and test on a fresh Armbian Buster on Helios4 that I did yesterday.

Note 1: I set up APT (/etc/apt/preferences.d/90_zfs) to force the use of backports.

Note 2: You will see an error when the zfsutils-linux postinst script tries to start the ZFS services. Just ignore it; after a systemctl daemon-reload and a reboot, everything initializes properly.




user@helios4:~$ sudo apt-get install linux-headers-next-mvebu
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  linux-headers-next-mvebu
0 upgraded, 1 newly installed, 0 to remove and 44 not upgraded.
Need to get 10.6 MB of archives.
After this operation, 74.0 MB of additional disk space will be used.
Get:1 https://apt.armbian.com buster/main armhf linux-headers-next-mvebu armhf 5.91 [10.6 MB]
Fetched 10.6 MB in 15s (726 kB/s)                                                                                                                  
Selecting previously unselected package linux-headers-next-mvebu.
(Reading database ... 32967 files and directories currently installed.)
Preparing to unpack .../linux-headers-next-mvebu_5.91_armhf.deb ...
Unpacking linux-headers-next-mvebu (5.91) ...
Setting up linux-headers-next-mvebu (5.91) ...
Compiling headers - please wait ...

user@helios4:~$ sudo apt-get install zfs-dkms zfsutils-linux
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  dkms file libelf-dev libmagic-mgc libmagic1 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms
Suggested packages:
  python3-apport menu spl nfs-kernel-server samba-common-bin zfs-initramfs | zfs-dracut
Recommended packages:
  fakeroot linux-headers-686-pae | linux-headers-amd64 | linux-headers-generic | linux-headers zfs-zed
The following NEW packages will be installed:
  dkms file libelf-dev libmagic-mgc libmagic1 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms zfs-dkms zfsutils-linux
0 upgraded, 12 newly installed, 0 to remove and 44 not upgraded.
Need to get 3424 kB of archives.
After this operation, 23.2 MB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 http://cdn-fastly.deb.debian.org/debian buster/main armhf dkms all 2.6.1-4 [74.4 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian buster/main armhf libmagic-mgc armhf 1:5.35-4 [242 kB]
Get:3 http://cdn-fastly.deb.debian.org/debian buster/main armhf libmagic1 armhf 1:5.35-4 [110 kB]
Get:4 http://cdn-fastly.deb.debian.org/debian buster/main armhf file armhf 1:5.35-4 [65.4 kB]
Get:5 http://cdn-fastly.deb.debian.org/debian buster/main armhf libelf-dev armhf 0.176-1.1 [69.2 kB]
Get:6 http://cdn-fastly.deb.debian.org/debian buster-backports/main armhf spl-dkms all 0.7.13-1~bpo10+1 [400 kB]
Get:7 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf zfs-dkms all 0.7.13-1~bpo10+1 [1402 kB]
Get:8 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libuutil1linux armhf 0.7.13-1~bpo10+1 [54.4 kB]
Get:9 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libnvpair1linux armhf 0.7.13-1~bpo10+1 [43.9 kB]
Get:10 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libzpool2linux armhf 0.7.13-1~bpo10+1 [548 kB]
Get:11 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libzfs2linux armhf 0.7.13-1~bpo10+1 [130 kB]
Get:12 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf zfsutils-linux armhf 0.7.13-1~bpo10+1 [284 kB]
Fetched 3424 kB in 4s (868 kB/s)        
Preconfiguring packages ...
Selecting previously unselected package dkms.
(Reading database ... 57533 files and directories currently installed.)
Preparing to unpack .../0-dkms_2.6.1-4_all.deb ...
Unpacking dkms (2.6.1-4) ...
Selecting previously unselected package libmagic-mgc.
Preparing to unpack .../1-libmagic-mgc_1%3a5.35-4_armhf.deb ...
Unpacking libmagic-mgc (1:5.35-4) ...
Selecting previously unselected package libmagic1:armhf.
Preparing to unpack .../2-libmagic1_1%3a5.35-4_armhf.deb ...
Unpacking libmagic1:armhf (1:5.35-4) ...
Selecting previously unselected package file.
Preparing to unpack .../3-file_1%3a5.35-4_armhf.deb ...
Unpacking file (1:5.35-4) ...
Selecting previously unselected package libelf-dev:armhf.
Preparing to unpack .../4-libelf-dev_0.176-1.1_armhf.deb ...
Unpacking libelf-dev:armhf (0.176-1.1) ...
Selecting previously unselected package spl-dkms.
Preparing to unpack .../5-spl-dkms_0.7.13-1~bpo10+1_all.deb ...
Unpacking spl-dkms (0.7.13-1~bpo10+1) ...
Setting up dkms (2.6.1-4) ...
Setting up libmagic-mgc (1:5.35-4) ...
Setting up libmagic1:armhf (1:5.35-4) ...
Setting up file (1:5.35-4) ...
Setting up libelf-dev:armhf (0.176-1.1) ...
Setting up spl-dkms (0.7.13-1~bpo10+1) ...
Loading new spl-0.7.13 DKMS files...
Building for 4.19.63-mvebu
Building initial module for 4.19.63-mvebu
Done.

spl.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/

splat.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/

depmod...
Warning: The unit file, source configuration file or drop-ins of systemd-modules-load.service changed on disk. Run 'systemctl daemon-reload' to reload units.

DKMS: install completed.
Selecting previously unselected package zfs-dkms.
(Reading database ... 57954 files and directories currently installed.)
Preparing to unpack .../0-zfs-dkms_0.7.13-1~bpo10+1_all.deb ...
Unpacking zfs-dkms (0.7.13-1~bpo10+1) ...
Selecting previously unselected package libuutil1linux.
Preparing to unpack .../1-libuutil1linux_0.7.13-1~bpo10+1_armhf.deb ...
Unpacking libuutil1linux (0.7.13-1~bpo10+1) ...
Selecting previously unselected package libnvpair1linux.
Preparing to unpack .../2-libnvpair1linux_0.7.13-1~bpo10+1_armhf.deb ...
Unpacking libnvpair1linux (0.7.13-1~bpo10+1) ...
Selecting previously unselected package libzpool2linux.
Preparing to unpack .../3-libzpool2linux_0.7.13-1~bpo10+1_armhf.deb ...
Unpacking libzpool2linux (0.7.13-1~bpo10+1) ...
Selecting previously unselected package libzfs2linux.
Preparing to unpack .../4-libzfs2linux_0.7.13-1~bpo10+1_armhf.deb ...
Unpacking libzfs2linux (0.7.13-1~bpo10+1) ...
Selecting previously unselected package zfsutils-linux.
Preparing to unpack .../5-zfsutils-linux_0.7.13-1~bpo10+1_armhf.deb ...
Unpacking zfsutils-linux (0.7.13-1~bpo10+1) ...
Setting up libuutil1linux (0.7.13-1~bpo10+1) ...
Setting up zfs-dkms (0.7.13-1~bpo10+1) ...
WARNING: Building ZFS module on a 32-bit kernel.
Loading new zfs-0.7.13 DKMS files...
Building for 4.19.63-mvebu
Building initial module for 4.19.63-mvebu
Done.

zavl.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/

znvpair.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/

zunicode.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/

zcommon.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/

zfs.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/

zpios.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/

icp.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/

depmod...
Warning: The unit file, source configuration file or drop-ins of systemd-modules-load.service changed on disk. Run 'systemctl daemon-reload' to reload units.

DKMS: install completed.
Setting up libnvpair1linux (0.7.13-1~bpo10+1) ...
Setting up libzpool2linux (0.7.13-1~bpo10+1) ...
Setting up libzfs2linux (0.7.13-1~bpo10+1) ...
Setting up zfsutils-linux (0.7.13-1~bpo10+1) ...
Created symlink /etc/systemd/system/zfs-import.target.wants/zfs-import-cache.service -> /lib/systemd/system/zfs-import-cache.service.
Created symlink /etc/systemd/system/zfs-mount.service.wants/zfs-import.target -> /lib/systemd/system/zfs-import.target.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-import.target -> /lib/systemd/system/zfs-import.target.
Created symlink /etc/systemd/system/zfs-share.service.wants/zfs-mount.service -> /lib/systemd/system/zfs-mount.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-mount.service -> /lib/systemd/system/zfs-mount.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-share.service -> /lib/systemd/system/zfs-share.service.
Created symlink /etc/systemd/system/multi-user.target.wants/zfs.target -> /lib/systemd/system/zfs.target.
zfs-import-scan.service is a disabled or a static unit, not starting it.
Job for zfs-mount.service failed because the control process exited with error code.
See "systemctl status zfs-mount.service" and "journalctl -xe" for details.
invoke-rc.d: initscript zfs-mount, action "start" failed.
* zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2019-09-09 07:39:31 UTC; 22ms ago
     Docs: man:zfs(8)
  Process: 7556 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
 Main PID: 7556 (code=exited, status=1/FAILURE)

Sep 09 07:39:21 helios4 systemd[1]: Starting Mount ZFS filesystems...
Sep 09 07:39:31 helios4 zfs[7556]: /dev/zfs and /proc/self/mounts are required.
Sep 09 07:39:31 helios4 zfs[7556]: Try running 'udevadm trigger' and 'mount -t proc proc /proc' as root.
Sep 09 07:39:31 helios4 systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Sep 09 07:39:31 helios4 systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
Sep 09 07:39:31 helios4 systemd[1]: Failed to start Mount ZFS filesystems.
dpkg: error processing package zfsutils-linux (--configure):
 installed zfsutils-linux package post-installation script subprocess returned error exit status 1
Processing triggers for systemd (241-5) ...
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for libc-bin (2.28-10) ...
Errors were encountered while processing:
 zfsutils-linux
E: Sub-process /usr/bin/dpkg returned an error code (1)

user@helios4:~$ sudo systemctl daemon-reload 

user@helios4:~$ sudo reboot


$ ssh 10.10.10.1
user@10.10.10.1's password: 
 _   _      _ _           _  _   
| | | | ___| (_) ___  ___| || |  
| |_| |/ _ \ | |/ _ \/ __| || |_ 
|  _  |  __/ | | (_) \__ \__   _|
|_| |_|\___|_|_|\___/|___/  |_|  
                                 
Welcome to Debian Buster with Armbian Linux 4.19.63-mvebu
System load:   1.11 0.30 0.10  	Up time:       0 min		
Memory usage:  4 % of 2015MB 	IP:            10.10.10.1
CPU temp:      55°C           	Ambient temp:  36°C           	
Usage of /:    10% of 15G    	

[ 0 security updates available, 44 updates total: apt upgrade ]
Last check: 2019-09-09 07:41

Last login: Mon Sep  9 05:16:46 2019 from 10.10.10.254

user@helios4:~$ systemctl status zfs-mount.service
* zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: active (exited) since Mon 2019-09-09 07:41:22 UTC; 1min 44s ago
     Docs: man:zfs(8)
  Process: 412 ExecStart=/sbin/zfs mount -a (code=exited, status=0/SUCCESS)
 Main PID: 412 (code=exited, status=0/SUCCESS)

Sep 09 07:41:21 helios4 systemd[1]: Starting Mount ZFS filesystems...
Sep 09 07:41:22 helios4 systemd[1]: Started Mount ZFS filesystems.

user@helios4:~$ sudo zpool create stock mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd 

user@helios4:~$ sudo zpool status
  pool: stock
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	stock       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0

errors: No known data errors

user@helios4:~$ sudo zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
stock   222G   148K   222G         -     0%     0%  1.00x  ONLINE  -

user@helios4:~$ sudo zfs set dedup=off stock

user@helios4:~$ sudo zfs create stock/stuff

user@helios4:~$ sudo dd if=/dev/zero of=/stock/stuff/test.data bs=1M count=4000
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 47.5516 s, 88.2 MB/s

user@helios4:~$ sudo zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
stock   222G  3.91G   218G         -     0%     1%  1.00x  ONLINE  -

user@helios4:~$ sudo dd if=/stock/stuff/test.data of=/dev/null bs=1M
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB, 3.9 GiB) copied, 18.4318 s, 228 MB/s


 

 


Unfortunately, on the Kobol-supplied image Armbian_5.91_Helios4_Debian_stretch_default_4.14.135.7z, I get exactly the same problem:

 

(clean install)

root@helios4:~# apt-get install linux-headers-mvebu    # works, headers for 4.14

root@helios4:~# apt-get install spl-dkms

[...snip...]

Loading new spl-0.7.12 DKMS files...
Building for 4.14.135-mvebu
Building initial module for 4.14.135-mvebu
configure: error: *** Unable to build an empty module.
Error! Bad return status for module build on kernel: 4.14.135-mvebu (armv7l)
Consult /var/lib/dkms/spl/0.7.12/build/make.log for more information.
root@helios4:~#

 

I will try @gprovost's steps now (I just missed them earlier). Many thanks for sharing!


@gprovost which image are you using exactly? I am using the ones from kobol.io and still have the same issue; see the transcript below. This is Armbian_5.91_Helios4_Debian_buster_next_4.19.63.7z from kobol.io, dated Aug 2, 2019.

 




root@helios4:/etc/apt/preferences.d# apt-get install linux-headers-next-mvebu
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  linux-headers-next-mvebu
0 upgraded, 1 newly installed, 0 to remove and 50 not upgraded.
Need to get 10.6 MB of archives.
After this operation, 74.0 MB of additional disk space will be used.
Get:1 https://apt.armbian.com buster/main armhf linux-headers-next-mvebu armhf 5.91 [10.6 MB]
Fetched 10.6 MB in 2s (4750 kB/s)
Selecting previously unselected package linux-headers-next-mvebu.
(Reading database ... 32124 files and directories currently installed.)
Preparing to unpack .../linux-headers-next-mvebu_5.91_armhf.deb ...
Unpacking linux-headers-next-mvebu (5.91) ...
Setting up linux-headers-next-mvebu (5.91) ...
Compiling headers - please wait ...
root@helios4:/etc/apt/preferences.d# apt-get install zfs-dkms zfsutils-linux
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  dkms file libelf-dev libmagic-mgc libmagic1 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms
  zlib1g-dev
Suggested packages:
  python3-apport menu spl nfs-kernel-server samba-common-bin zfs-initramfs | zfs-dracut
Recommended packages:
  fakeroot linux-headers-686-pae | linux-headers-amd64 | linux-headers-generic | linux-headers zfs-zed
The following NEW packages will be installed:
  dkms file libelf-dev libmagic-mgc libmagic1 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms
  zfs-dkms zfsutils-linux zlib1g-dev
0 upgraded, 13 newly installed, 0 to remove and 50 not upgraded.
Need to get 3631 kB of archives.
After this operation, 23.6 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://cdn-fastly.deb.debian.org/debian buster/main armhf dkms all 2.6.1-4 [74.4 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian buster/main armhf libmagic-mgc armhf 1:5.35-4 [242 kB]
Get:3 http://cdn-fastly.deb.debian.org/debian buster/main armhf libmagic1 armhf 1:5.35-4 [110 kB]
Get:4 http://cdn-fastly.deb.debian.org/debian buster/main armhf file armhf 1:5.35-4 [65.4 kB]
Get:5 http://cdn-fastly.deb.debian.org/debian buster/main armhf zlib1g-dev armhf 1:1.2.11.dfsg-1 [207 kB]
Get:6 http://cdn-fastly.deb.debian.org/debian buster/main armhf libelf-dev armhf 0.176-1.1 [69.2 kB]
Get:7 http://cdn-fastly.deb.debian.org/debian buster-backports/main armhf spl-dkms all 0.7.13-1~bpo10+1 [400 kB]
Get:8 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf zfs-dkms all 0.7.13-1~bpo10+1 [1402 kB]
Get:9 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libuutil1linux armhf 0.7.13-1~bpo10+1 [54.4 kB]
Get:10 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libnvpair1linux armhf 0.7.13-1~bpo10+1 [43.9 kB]
Get:11 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libzpool2linux armhf 0.7.13-1~bpo10+1 [548 kB]
Get:12 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libzfs2linux armhf 0.7.13-1~bpo10+1 [130 kB]
Get:13 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf zfsutils-linux armhf 0.7.13-1~bpo10+1 [284 kB]
Fetched 3631 kB in 1s (3002 kB/s)
Preconfiguring packages ...
Selecting previously unselected package dkms.
(Reading database ... 56690 files and directories currently installed.)
Preparing to unpack .../0-dkms_2.6.1-4_all.deb ...
Unpacking dkms (2.6.1-4) ...
Selecting previously unselected package libmagic-mgc.
Preparing to unpack .../1-libmagic-mgc_1%3a5.35-4_armhf.deb ...
Unpacking libmagic-mgc (1:5.35-4) ...
Selecting previously unselected package libmagic1:armhf.
Preparing to unpack .../2-libmagic1_1%3a5.35-4_armhf.deb ...
Unpacking libmagic1:armhf (1:5.35-4) ...
Selecting previously unselected package file.
Preparing to unpack .../3-file_1%3a5.35-4_armhf.deb ...
Unpacking file (1:5.35-4) ...
Selecting previously unselected package zlib1g-dev:armhf.
Preparing to unpack .../4-zlib1g-dev_1%3a1.2.11.dfsg-1_armhf.deb ...
Unpacking zlib1g-dev:armhf (1:1.2.11.dfsg-1) ...
Selecting previously unselected package libelf-dev:armhf.
Preparing to unpack .../5-libelf-dev_0.176-1.1_armhf.deb ...
Unpacking libelf-dev:armhf (0.176-1.1) ...
Selecting previously unselected package spl-dkms.
Preparing to unpack .../6-spl-dkms_0.7.13-1~bpo10+1_all.deb ...
Unpacking spl-dkms (0.7.13-1~bpo10+1) ...
Setting up dkms (2.6.1-4) ...
Setting up libmagic-mgc (1:5.35-4) ...
Setting up libmagic1:armhf (1:5.35-4) ...
Setting up file (1:5.35-4) ...
Setting up zlib1g-dev:armhf (1:1.2.11.dfsg-1) ...
Setting up libelf-dev:armhf (0.176-1.1) ...
Setting up spl-dkms (0.7.13-1~bpo10+1) ...
Loading new spl-0.7.13 DKMS files...
Building for 4.19.63-mvebu
Building initial module for 4.19.63-mvebu
configure: error: *** Unable to build an empty module.
Error! Bad return status for module build on kernel: 4.19.63-mvebu (armv7l)
Consult /var/lib/dkms/spl/0.7.13/build/make.log for more information.
dpkg: error processing package spl-dkms (--configure):
 installed spl-dkms package post-installation script subprocess returned error exit status 10
Errors were encountered while processing:
 spl-dkms
E: Sub-process /usr/bin/dpkg returned an error code (1)
root@helios4:/etc/apt/preferences.d#


 

 


I just tried from scratch again using Armbian_5.91_Helios4_Debian_buster_next_4.19.63.7z that I downloaded from our wiki, and it works, with the same result as what I reported first. Pretty crazy that we don't get the same result.

 

Helios4_Armbian_Buster_ZoL_Install.log

 

I can see there are two DKMS issues currently open on https://github.com/armbian/build/issues

Not sure if it's related, but there might well be something to dig into there.

 

I also witnessed an unrelated APT repo issue when doing apt-get update; I had to run it twice, as you will see in the attached log. I was going to raise an issue for that, but it was just a clock issue.


 

On 9/9/2019 at 7:29 PM, qstaq said:

I wrote a simple script to install ZFS support and to set some default options for compression, arc, cache, etc. for low-mem / low-CPU devices. The same script works without changes on Debian Buster and Ubuntu 16.04, 18.04 & 19.04, as the process is identical on all OS variants as far as I can discern.

 


@qstaq this sounds interesting. Would you be willing to share these scripts/settings either in public or private? I reckon this could be very useful, especially for beginners like me.


On 12/8/2019 at 7:40 AM, dkxls said:

 


@qstaq this sounds interesting. Would you be willing to share these scripts/settings either in public or private? I reckon this could be very useful, especially for beginners like me.

The problem with ZFS on Helios4 is not so much the memory (2GB is plenty) but the limited address space, in particular the in-kernel virtual address space, which ZFS uses via vmalloc (the Solaris way of doing it, not the Linux way). Through normal usage, and the address space fragmentation it causes, that address space can get exhausted and the system will simply lock up. It's not an issue on 64-bit systems, but on 32-bit systems with vanilla settings it happens rather quickly. So you want to configure things to fight this.

 

The absolute bare minimum is to increase the size of the kernel address space allocated to vmalloc via the kernel command line. With the default 1G/3G split between kernel and user space (CONFIG_VMSPLIT_3G) you can pass in vmalloc=512M or even vmalloc=768M to the kernel. Even with this setting and nightly reboots I've been experiencing the occasional lock-up.
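
On Armbian the parameter can be added without touching the bootloader scripts (a sketch, assuming the standard extraargs mechanism in /boot/armbianEnv.txt):

# append to /boot/armbianEnv.txt, then reboot
extraargs=vmalloc=512M

# afterwards, verify the enlarged area:
grep VmallocTotal /proc/meminfo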

 

Better still, you can build your own kernel with CONFIG_VMSPLIT_2G (or even CONFIG_VMSPLIT_1G if you really want to dedicate the whole device to storage) to increase the kernel address space, at which point you can increase vmalloc further, to 50-75% of the kernel space. This is probably the setting that helps the most: my Helios4 has been running stable for a couple of months without scheduled reboots.

 

For configuring the zfs module, I put the following into /etc/modprobe.d/zfs.conf:

options zfs zfs_arc_max=805306368
options zfs zfs_arc_meta_limit_percent=90
options zfs zfs_arc_meta_min=33554432
options zfs zfs_arc_min=134217728
options zfs zfs_dirty_data_max_percent=20
options zfs zfs_dirty_data_max_max_percent=30
options zfs zfs_dirty_data_sync=16777216
options zfs zfs_txg_timeout=2

I don't quite remember my rationale behind these, but you can read up on them at https://github.com/zfsonlinux/zfs/wiki/ZFS-on-Linux-Module-Parameters. You probably also don't want to go crazy on the recsize of your datasets.
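
After a reboot (or a module reload) you can confirm the values were picked up:

cat /sys/module/zfs/parameters/zfs_arc_max
cat /sys/module/zfs/parameters/zfs_txg_timeout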

 

Disclaimer: I'm not an expert, so take my explanation and also the settings with a grain of salt.


And I'm back, and back to square one.
After 3 months of enjoying ZFS 0.7.13 on Helios4, I noticed that 0.8.2 was available in the backports. So, after doing apt-get update, I did apt-get upgrade and lost my ZFS.
Basically I have spent the whole afternoon on this and I am fully stuck.
If I use apt-get to try and reinstall ZFS, I end up with the "configure: error: *** Unable to build an empty module."
If I manually use dkms to build ZFS 0.8.2 from the sources left around by apt-get, I get a successful build; however, "modprobe zfs" gives me an Exec Format Error. dmesg shows:

zlua: section 4 reloc 44 sym 'longjmp': unsupported interworking call (Thumb -> ARM)

 

Does anyone have 0.8.2 successfully running on Helios4?

 

Update: if I remove the 90_zfs APT preference file, I can try to install 0.7.12. However, that also breaks (luckily a lot faster, so I can debug).

The problem seems to be that my dkms is now broken.

# more /var/lib/dkms/spl/0.7.12/build/make.log
DKMS make.log for spl-0.7.12 for kernel 4.19.63-mvebu (armv7l)
Sun Dec 22 22:32:42 CET 2019
make: *** No targets specified and no makefile found.  Stop.
Relevant bits from /var/lib/dkms/spl/0.7.12/build/config.log:

configure:9294: cp conftest.c build && make modules -C /lib/modules/4.19.63-mvebu/build EXTRA_CFLAGS=-Werror-implicit-function-declaration   M=/var/lib/dkms/spl/0.7.12/build/build
Makefile:613: arch//Makefile: No such file or directory
make: *** No rule to make target 'arch//Makefile'.  Stop.

The empty path segment in arch//Makefile suggests the ARCH variable was unset during the DKMS build. I found https://forum.armbian.com/topic/10344-postinst-of-dkms-package-is-broken-on-armbian/ and patched my common.postinst file. I have now been successful with 0.7.12 and have re-imported my pool. All seems fine.

I would love to try 0.8.2 as soon as someone confirms that it works :-)

 


@Steven Keuchel sorry for my late reply and thanks a lot for your advice regarding the kernel memory configuration. Much appreciated!

I will probably start with a readily available kernel and set the suggested parameters, and will later look into compiling my own kernel. Do you have any advice regarding the compilation? Any pointer to documentation or even a config file would be greatly appreciated. :)
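
For what it's worth, the Armbian build system makes a custom kernel fairly approachable (a sketch, using option names from the 2019-era build scripts):

git clone https://github.com/armbian/build
cd build
# KERNEL_CONFIGURE=yes drops you into menuconfig, where the memory split
# option lives under "Kernel Features"
./compile.sh KERNEL_ONLY=yes KERNEL_CONFIGURE=yes BOARD=helios4 BRANCH=next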


@SvenHz Thanks for sharing your experience with ZFS 0.8.2! I will stick to the 0.7 series for now as well, if at all possible.


I think it would be good to have some kind of documentation/guidelines for setting up ZFS on a Helios4, as currently all this information is scattered and one needs to go through various forum threads to get it working.

@gprovost Have you considered putting this on the Helios4 wiki, maybe marked as an advanced or experimental setup for people who are keen to tinker with this kind of thing?
