tkaiser

ZFS on Armbian


Not just since Oracle killed Solaris a few weeks ago, the question has been how to continue with ZFS (me being an absolute ZFS fanboi). After reading the ZoL (ZFS on Linux) 0.7 release notes it was just a matter of time before giving it a try: on an energy efficient x64 board supporting Intel's QuickAssist (QAT) and also on the most appropriate ARM platform we support for these use cases, the Clearfog Pro (or soon Helios4). The release notes list a lot of performance improvements (check the performance section there yourself), so let's give it a try:

 

Since ZoL is said to be compatible with kernel 4.12 at most and we already switched to 4.13 for Armada 38x, the best idea is to start with the current OMV image for the Clearfogs since it still relies on a 4.12 kernel. So with a Clearfog it's just grabbing http://kaiser-edv.de/tmp/NumpU8/OMV_3_0_87_Clearfog_Pro_4.12.9.img.xz, letting it boot, waiting for the automatic reboot and then accessing the Clearfog through serial console (or accessing the OMV web UI and enabling SSH root login -- see release notes). I built the most recent 4.12 kernel plus headers, available here: http://kaiser-edv.de/tmp/NumpU8/zfs-on-linux/
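If the image still needs to be written to SD card first, that's roughly the following (a sketch; /dev/sdX is just a placeholder for the actual card device, so double-check before writing):

wget http://kaiser-edv.de/tmp/NumpU8/OMV_3_0_87_Clearfog_Pro_4.12.9.img.xz
# decompress on the fly and write to the SD card (replace /dev/sdX with the real device!)
xzcat OMV_3_0_87_Clearfog_Pro_4.12.9.img.xz | dd of=/dev/sdX bs=1M conv=fsync
sync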

 

Getting ZFS to work is basically the following: after booting the OMV image ('next' images from dl.armbian.com might work too -- not tested), install the headers and kernel package from the last link above (you'll then be on kernel 4.12.13), then run armbian-config --> Armbian --> 'Hold -- Freeze kernel and board support packages' to prevent the kernel from being up-/downgraded by 'apt upgrade'. Then simply follow https://github.com/zfsonlinux/zfs/wiki/Building-ZFS
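If you prefer the shell over armbian-config for the freeze step, holding the two packages with apt-mark should have the same effect (a sketch, using the package names from the commands below):

# prevent 'apt upgrade' from replacing the pinned 4.12.13 kernel
apt-mark hold linux-image-next-mvebu linux-headers-next-mvebu
apt-mark showhold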

 

With Armbian/OMV it's just

wget http://kaiser-edv.de/tmp/NumpU8/zfs-on-linux/linux-image-next-mvebu_5.32_armhf.deb
wget http://kaiser-edv.de/tmp/NumpU8/zfs-on-linux/linux-headers-next-mvebu_5.32_armhf.deb
dpkg -i linux-image-next-mvebu_5.32_armhf.deb linux-headers-next-mvebu_5.32_armhf.deb
apt install build-essential autoconf libtool gawk alien fakeroot zlib1g-dev uuid-dev \
	libattr1-dev libblkid-dev libselinux-dev libudev-dev libdevmapper-dev lsscsi ksh
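For reference, the build steps from the ZoL wiki boil down to roughly the following (a sketch from memory for ZoL 0.7, building SPL first and ZFS afterwards -- treat the wiki as the authoritative version):

# build and install the SPL kernel module first
git clone https://github.com/zfsonlinux/spl.git
cd spl
./autogen.sh && ./configure && make -j4 && make install
cd ..
# then ZFS itself (kernel module, userland tools and libraries)
git clone https://github.com/zfsonlinux/zfs.git
cd zfs
./autogen.sh && ./configure --with-spl="$(pwd)/../spl" && make -j4 && make install
ldconfig && depmod -a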

Then follow the build instructions linked above. In both source directories I also did a 'make install' and finally executed 'ldconfig' to get the libraries installed below /usr/local/lib/ working (see here for details). A final 'echo zfs >> /etc/modules' followed by a reboot and ZFS is ready:

root@clearfogpro:~# zpool create -f test raidz sda sdb sdc
root@clearfogpro:~# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
test   334G  1.55M   334G         -     0%     0%  1.00x  ONLINE  -
root@clearfogpro:~# zpool status
  pool: test
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	test        ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	    sdc     ONLINE       0     0     0

errors: No known data errors
root@clearfogpro:~# zfs create -o mountpoint=/var/volumes/raidz -o compression=on test/raidz

These are 3 different 120 GB SSDs (SMART output for all of them) combined as RAIDZ behind 3 different SATA controllers (the Transcend in the M.2 slot served by the Armada 38x directly, the Samsung EVO840 attached to an ASM1062 and the Intel 540 behind a Marvell 88SE9215). Armbianmonitor -u: http://sprunge.us/CUHS

 

The focus of the following tests/research is performance and reliability, since it's just a shame that in 2017 users still want to use mdraid/RAID5. A first quick test with 'iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2' failed (presumably because '-I' requests O_DIRECT, which ZoL does not support at this point); without the '-I' iozone runs, but the results are discouraging (as expected):

                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4     5033     4856   242613   243166    18853      168
          102400      16     4981     5104   219828   354587    79886      664
          102400     512     5006     5069   363717   366405   416668     5330
          102400    1024     5029     5068   337919   326607   370943     5335
          102400   16384     4856     4878   356314   357925   357245     5087

I'd better stop with Armbian/Clearfog for now and try this on x64 first ;)


Any updates on this?

 

There has been significant progress on zfsonlinux recently: 0.7.11 resolves a few major issues and greatly improves performance on x86-64. I have a feeling it may be ready for "production" testing on ARM -- need to find a suitable board first!

 

edit: Have you tried the Marvell ARMADA A8040 series from SolidRun? Supposedly good networking and storage controllers.

On 9/14/2017 at 3:20 PM, tkaiser said:

Not just since Oracle killed Solaris a few weeks ago, the question has been how to continue with ZFS (me being an absolute ZFS fanboi). [...]

 

ZFS on big iron is pretty cool on Sun/Oracle (snoracle?). ZFS has a lot of options and checks off boxes for many on the desktop, but in the constrained environment of the SBCs that Armbian is targeting it's perhaps not the best choice for a primary file system... ZFS wants a fair amount of RAM, and memory is a bit precious on SBCs...
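If someone wants to try it on a low-memory board anyway, the ARC can at least be capped via a module parameter (a sketch; the 256 MiB value below is an arbitrary example, not a recommendation):

# limit the ZFS ARC to 256 MiB (value in bytes), applied the next time the module loads
echo "options zfs zfs_arc_max=268435456" > /etc/modprobe.d/zfs.conf
# check current and maximum ARC size on a running system
grep -E "^(size|c_max)" /proc/spl/kstat/zfs/arcstats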

 

XFS is interesting for mounted data drives - perhaps not as a primary file system, but it's fast at reads/writes in directories containing many files - think media player applications...
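A minimal sketch of putting XFS on such a data drive (the device name /dev/sda1 and the mountpoint are placeholders):

apt install xfsprogs
mkfs.xfs -f /dev/sda1
mkdir -p /mnt/media
mount -t xfs /dev/sda1 /mnt/media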

 

Otherwise, ext3/4 is still a very good choice for SBCs running on SD cards.
