dkxls

Reputation Activity

  1. Like
    dkxls got a reaction from gprovost in ZFS on Helios4   
    Great! Thanks @SvenHz for sharing your experience!
     
     
    Yes, it's very likely the issue I mentioned above with binutils; for the Debian Buster image I am actually sure, as the stable repositories still ship version 2.31, which is too old.
     
    The simple solution here is to use the packages from the Debian testing repository. This can be simply done by adding two files:
    /etc/apt/sources.list.d/testing.list:

    deb http://httpredir.debian.org/debian testing main
    #deb-src http://httpredir.debian.org/debian testing main

    /etc/apt/preferences.d/10_testing:

    Package: *
    Pin: release a=testing
    Pin-Priority: -10

    Package: binutils binutils-common libbinutils binutils-arm-linux-gnueabihf libctf0
    Pin: release a=testing
    Pin-Priority: 900

    After that, a simple "apt update && apt upgrade" should do the trick. With these settings, apt will prefer only the listed binutils packages from testing and keep all other packages as they are.
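    To verify the pin is picked up before upgrading, the candidate version for binutils should now come from testing; a quick check (exact versions will differ):

    # apt update
    # apt policy binutils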
  2. Like
    dkxls reacted to SvenHz in ZFS on Helios4   
    Here is an update. TL;DR: happy to report that zfs 2.0.2 seems to work fine on Armbian Hirsute.
     
    I am planning a reinstall of our "family production NAS" with the aim to do as little hacking/customization as possible to get ZFS to work.
     
    First I tried both the current (Mar 2021) Buster and Focal images, but failed at the 'apt install zfs-dkms' step with an "exec format error", basically failing to produce a working kernel module for ZFS. I suspect this is caused by the older binutils on both images.
     
    So then I downloaded the Ubuntu Hirsute unstable image with kernel 5.10.23 of 13 Mar 2021. Supplied binutils is 2.36 I believe.

    After the clean install, these are the steps I took:
     
    # apt update
    # apt upgrade
    Use armbian-config to install kernel headers (turns out to be 5.10.17)
    Use armbian-config to downgrade kernel 5.10.23 to 5.10.17 to match the old headers ...
    # apt install zfs-dkms
    # apt install zfsutils-linux

    After this and a reboot, I successfully imported an existing (zfs 0.7.x) pool.

    So far so good. I will wait until we are stable with Hirsute.
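    For reference, the import itself is the standard zpool workflow; a minimal sketch, with "tank" as a placeholder pool name (a bare "zpool import" lists the pools visible on the attached disks, and "zpool status" verifies health afterwards):

    # zpool import
    # zpool import tank
    # zpool status tank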
     
  3. Like
    dkxls got a reaction from gprovost in ZFS on Helios4   
    I just recently upgraded my Helios4 to Armbian 20.11.6 with kernel 5.9.14-mvebu. I encountered some minor hiccups, which turned out to be either a kernel or binutils bug, but the binutils developers fixed this in the 2.35 release, which you can install from testing (see https://github.com/openzfs/zfs/issues/11444 for the details).
     
    I haven't tested this on a 5.10 kernel, nor have I made the switch to ZFS 2.0 yet. But according to @Igor, this also seems to work just fine. @SvenHz, if you get this working on your Helios4, I would appreciate it if you could report back here, just to keep track of any issues (or the absence thereof).
  4. Like
    dkxls reacted to Igor in ZFS on Helios4   
    There are problems with 32bit ZFS support and there is nothing more we can do. 
    https://github.com/armbian/build/pull/2547 
    https://github.com/openzfs/zfs/issues/9392
    Broken on Debian, buildable on Ubuntu-based Armbian, at least on the latest (LTS) kernel 5.10.y.

    Upgrade to 5.10.y on 32bit target doesn't work automatically ...
     
    after reboot you need to run

    # dpkg-reconfigure zfs-dkms

    and select that you agree with the 32-bit caveats, but in the end it works: I could load my pool without issues.
     
  5. Like
    dkxls reacted to Steven Keuchel in ZFS on Helios4   
    The 768MB arc_max was for a vmalloc of 1024MB, and I was using zfs 0.7.12 when I wrote that, but I have since switched to 0.8 (with the Thumb2 patches from GitHub). I have looked at this again (now with 0.8), and the situation has indeed changed. Watching the vmalloc use of zfs (spl, actually) via /proc/vmallocinfo, it seems to hover around 100-150MB, with about 220MB being the highest I've seen; the total address space occupied by all allocations has always stayed under 300MB, and usually far below that. So it seems rather compact, with only a little fragmentation happening. ZFS thus appears to use less vmalloc than I assumed, and more lowmem, so I will have to revise my settings, because they clearly don't make sense anymore.
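    For anyone who wants to repeat that check, the spl/zfs allocations can be summed straight from /proc/vmallocinfo; a rough sketch (it assumes module allocations are tagged [spl]/[zfs] and that the allocation size in bytes is the second column):

    # grep -E '\[(spl|zfs)\]' /proc/vmallocinfo | awk '{sum += $2} END {printf "%.0f MiB\n", sum/1048576}'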
  6. Like
    dkxls got a reaction from TRS-80 in ZFS on Helios4   
    @SvenHz I made some progress on tracking down this issue with zfs 0.8 on the Helios4. More precisely, James Robertson (@jsrlabs) identified and fixed the missing Thumb-2 instructions used by the Armbian kernel; see the discussion on the ZFS mailing list.
     
    I went ahead and opened a bug report (zfsonlinux/zfs#9957), and one of the ZFS developers proposed a fix in this pull request: zfsonlinux/zfs#9967. However, the fix needs testing, so I thought I'd bring it up here in the hope that someone else will give it a shot as well.
  7. Like
    dkxls got a reaction from TRS-80 in ZFS on Helios4   
    I finally got ZFS installed on my Helios4 using Armbian 19.11.3 Buster with Linux 4.19.84, but I had to stick with ZFS version 0.7.12, as advised by @SvenHz in this post. Loading the ZFS 0.8 module would fail with the same error.
     
    @Steven Keuchel  Thanks again for your input on the kernel/ZFS memory settings. I had a closer look at your settings and noticed that you set `zfs_arc_max=805306368` (i.e. 768 MB), which corresponds to the maximum `vmalloc` you seem to use. Or were those ZFS settings used with `CONFIG_VMSPLIT_2G`, and you in fact use an even higher value for `vmalloc`? Which ZFS version did you use, 0.6 or 0.7?
     
    As @qstaq stated, the default for `zfs_arc_max` is 50% of the available memory. There is, however, a pull request to increase this even further, to 3/4 of all memory or all but 1GB, whichever is greater. This change is enabled by the revised memory allocation for the ARC buffers in ZFS version 0.7. In fact, it makes me wonder whether those settings for `vmalloc` and `zfs_arc_max` still make sense for ZFS >= 0.7.
     
    The fundamental issue with ZFS's extensive use of the virtual address space was outlined rather well by one of the developers in this ZOL issue. However, it was addressed in version 0.7 with the introduction of the ARC Buffer Data (ABD) structure, as also mentioned in the ZFS 0.7 release notes.
     
    It appears that the issues with the virtual address space are limited to ZFS versions 0.6 and older. Or is my understanding of this incorrect? Any advice or comment in this regard would be greatly appreciated.
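    For reference, such a module parameter is normally set in a modprobe options file, read when the zfs module loads; a minimal sketch using the 768 MB value from above (the file name is illustrative):

    /etc/modprobe.d/zfs.conf:

    options zfs zfs_arc_max=805306368

    On a running system the value can usually also be changed on the fly via /sys/module/zfs/parameters/zfs_arc_max.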
  8. Like
    dkxls got a reaction from lanefu in ZFS on Helios4   
    I finally got ZFS installed on my Helios4 using Armbian 19.11.3 Buster with Linux 4.19.84, but I had to stick with ZFS version 0.7.12, as advised by @SvenHz in this post. Loading the ZFS 0.8 module would fail with the same error.
     
    @Steven Keuchel  Thanks again for your input on the kernel/ZFS memory settings. I had a closer look at your settings and noticed that you set `zfs_arc_max=805306368` (i.e. 768 MB), which corresponds to the maximum `vmalloc` you seem to use. Or were those ZFS settings used with `CONFIG_VMSPLIT_2G`, and you in fact use an even higher value for `vmalloc`? Which ZFS version did you use, 0.6 or 0.7?
     
    As @qstaq stated, the default for `zfs_arc_max` is 50% of the available memory. There is, however, a pull request to increase this even further, to 3/4 of all memory or all but 1GB, whichever is greater. This change is enabled by the revised memory allocation for the ARC buffers in ZFS version 0.7. In fact, it makes me wonder whether those settings for `vmalloc` and `zfs_arc_max` still make sense for ZFS >= 0.7.
     
    The fundamental issue with ZFS's extensive use of the virtual address space was outlined rather well by one of the developers in this ZOL issue. However, it was addressed in version 0.7 with the introduction of the ARC Buffer Data (ABD) structure, as also mentioned in the ZFS 0.7 release notes.
     
    It appears that the issues with the virtual address space are limited to ZFS versions 0.6 and older. Or is my understanding of this incorrect? Any advice or comment in this regard would be greatly appreciated.
  9. Like
    dkxls reacted to qstaq in ZFS on Helios4   
    Armbian kernel configs (the ones that I have looked at, anyway) don't have the config options to build the ZFS modules. You will have to build a new kernel or build the module out of tree.
     
    Also, there is a lot of misunderstanding about ZFS RAM use. The main culprit behind the seemingly huge RAM use of ZFS is the de-duplication feature. If you disable de-duplication, you can run a simple samba/nfs file server with 4-6TB of usable storage in about 650MB RAM. I have even run simple zfs pools on an OPi Zero with 512MB RAM, with de-dup disabled and a couple of config tweaks, using a 1TB SSD over USB for storage, though I would suggest that 750MB RAM is a more sensible minimum.
     
    The key is to disable de-dup BEFORE you write any data to the pools; disabling it afterwards won't lower RAM use. You can also limit the RAM used by setting zfs_arc_max to 25% of system RAM instead of the default of 50%, and disable the vdev cache with zfs_vdev_cache_size = 0. I have also found that setting Armbian's ZRAM swap to 25-30% of RAM instead of 50% improves performance. Obviously performance isn't going to be quite as good as on a machine with 8GB RAM, but the difference isn't that big either.
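    Spelled out as commands, those tweaks would look roughly like this (a sketch; "tank" is a placeholder pool name, and the arc value assumes a 2GB board, i.e. 25% = 512MB):

    # zfs set dedup=off tank

    /etc/modprobe.d/zfs.conf:

    options zfs zfs_arc_max=536870912 zfs_vdev_cache_size=0

    The modprobe settings take effect the next time the zfs module is loaded (or after a reboot).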
  10. Like
    dkxls got a reaction from gprovost in ZFS on Helios4   
    @gprovost Sure. I wasn't aware that you accept community contributions to the wiki.  I might take a stab at that once I have my own setup working ... If someone else, probably more experienced with ZFS, is up for that, please go ahead - I for one would certainly appreciate it.
  11. Like
    dkxls reacted to qstaq in ZFS on Helios4   
    There are license incompatibility problems and patent risks with ZFS on Linux. ZFS is CDDL-licensed, which risks potential patent lawsuits for anyone publishing ZFS code contributions under a GPL license. That's why ZFS will not be in any mainline kernel any time soon. The CDDL license was seemingly designed by Sun to be incompatible with the GPL. That doesn't mean you can't use ZFS on Linux; you just have to use the CDDL-licensed ZFS code to be granted Oracle's patent protection/immunity. Debian, Ubuntu, etc. get around this by shipping the ZFS module as an out-of-tree DKMS build and not as part of the kernel.
     
    I would suggest that there are currently too many problems with ZFS, from both a practical and a legal viewpoint, to think about making ZFS a core part of Armbian. Most of Armbian's target hardware is under-specified for a default ZFS setup, and proper storage management under ZFS requires much more knowledge than mdraid with ext4, or BTRFS. It's not complex; it just requires you to think and plan more at the deployment stage and to have an awareness of some storage concepts that most users don't.
     
    Now on to the positive news: ZFS is an excellent filesystem, and for certain use cases it is far more powerful and flexible than BTRFS or any lvm/mdraid/ext4 combination; it's also pretty simple to administer once you learn the basic concepts. I'm sure there is a small portion of Armbian users that would get significant benefit from having ZFS available as a storage option. Even better news: fundamentally, every Armbian system already supports ZFS at a basic level, as both Debian and Ubuntu already have the zfs-dkms package available in the repos. It takes less than 5 minutes to enable ZFS support on any Armbian image just by installing the following packages with apt: zfs-dkms, zfs-initramfs & zfsutils-linux (zfsnap is probably also wanted, but not required); see the sketch below.

    The problem is that the default config will definitely not be suitable for a 1-2GB RAM SBC, and we would need to create a more appropriate default config. I still wouldn't recommend ZFS for the rootfs; if you want snapshots on your rootfs, use BTRFS, or ext4 with timeshift.
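    In command form, that five-minute enablement amounts to the following (a minimal sketch of the steps described above; "apt update" added for completeness):

    # apt update
    # apt install zfs-dkms zfs-initramfs zfsutils-linux

    and, optionally:

    # apt install zfsnap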
     
    @Igor @gprovost I would think that the best place to implement this would be as an extra option in armbian-config -> software. I wrote a simple script to install ZFS support and to set some default options for compression, arc, cache, etc. for low-mem/CPU devices. The same script works without changes on Debian Buster and Ubuntu 16.04, 18.04 & 19.04, as the process is identical on all OS variants as far as I can discern.
     
    Is https://github.com/armbian/config the live location for the armbian-config utility? If so, I will have a look at adding a basic ZFS option under software.
     
     
  12. Like
    dkxls got a reaction from gprovost in ZFS on Helios4   
    @qstaq this sounds interesting. Would you be willing to share these scripts/settings either in public or private? I reckon this could be very useful, especially for beginners like me.