Posts posted by dkxls

  1. On 3/21/2021 at 2:39 AM, SvenHz said:

    Here is an update. TL;DR: happy to report that zfs 2.0.2 seems to work fine on Armbian Hirsute.

    Great! Thanks @SvenHz for sharing your experience!

     

     

    Quote

    First I tried both the current (Mar 2021) Buster and Focal images but failed at the 'apt install zfs-dkms' step due to "exec format error" and basically failing to produce a working kernel module for ZFS. I suspect this is caused by the older binutils on both images.

    Yes, it's very likely the binutils issue I mentioned above. For the Debian Buster image I am actually certain, as the stable repositories still ship binutils 2.31, which is too old.

     

    The simple solution here is to use the packages from the Debian testing repository, which only requires adding two files:

    1.  /etc/apt/sources.list.d/testing.list:
      deb http://httpredir.debian.org/debian testing main
      #deb-src http://httpredir.debian.org/debian testing main

       

    2.  /etc/apt/preferences.d/10_testing:
      Package: *
      Pin: release a=testing
      Pin-Priority: -10
      
      Package: binutils binutils-common libbinutils binutils-arm-linux-gnueabihf libctf0
      Pin: release a=testing
      Pin-Priority: 900

    After that, a simple "apt update && apt upgrade" should do the trick. With these settings, apt will prefer only the listed binutils packages from testing and keep all other packages as they are.
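    For convenience, the two files above can be created in one go, and the pin verified afterwards. The following is a sketch, assuming a root shell on a Debian-based Armbian image; it only reproduces the file contents already listed above and adds an `apt policy` check:

```shell
# Sketch: set up the testing repo and the binutils pin described above
# (assumes a root shell on a Debian/Armbian image).
cat > /etc/apt/sources.list.d/testing.list <<'EOF'
deb http://httpredir.debian.org/debian testing main
EOF

cat > /etc/apt/preferences.d/10_testing <<'EOF'
Package: *
Pin: release a=testing
Pin-Priority: -10

Package: binutils binutils-common libbinutils binutils-arm-linux-gnueabihf libctf0
Pin: release a=testing
Pin-Priority: 900
EOF

apt update && apt upgrade
# Verify: the candidate for binutils should now come from testing,
# while all other packages keep their stable candidates.
apt policy binutils
```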

  2. On 1/28/2021 at 2:08 AM, SvenHz said:

    Otherwise I'll stick with the out-of-tree module build that I had done last year.

    I just recently upgraded my Helios4 to Armbian 20.11.6 with kernel 5.9.14-mvebu. I encountered some minor hiccups, which turned out to be a binutils bug rather than a kernel one; the binutils developers fixed it in the 2.35 release, which you can install from testing (see https://github.com/openzfs/zfs/issues/11444 for the details).

     

    I haven't tested this on a 5.10 kernel, nor have I made the switch to ZFS 2.0 yet, but according to @Igor this also seems to work just fine. @SvenHz, if you get this working on your Helios4, I would appreciate it if you could report back here, just to keep track of any issues (or the absence thereof).
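    Since the fix hinges on the binutils version, a quick check that the installed toolchain is new enough (>= 2.35) can look roughly like this. This is a sketch; the version parsing is an assumption and may need adjusting for your `ld` output:

```shell
# Sketch: check that binutils (ld) is at least 2.35, the release
# carrying the Thumb-2 fix. The version parsing is an assumption
# and may need adjusting for your toolchain's output format.
need=2.35
have=$(ld --version | head -n1 | grep -o '[0-9]\+\.[0-9]\+' | head -n1)
# sort -V orders version strings; if $need sorts first, $have is OK.
lowest=$(printf '%s\n%s\n' "$need" "$have" | sort -V | head -n1)
if [ "$lowest" = "$need" ]; then
    echo "binutils $have: new enough"
else
    echo "binutils $have: too old, install >= $need from testing"
fi
```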

  3. On 2/11/2020 at 11:10 PM, Steven Keuchel said:

    The 768MB arc_max was for a vmalloc of 1024MB, and I was using zfs 0.7.12 when I wrote that, but I have since switched to 0.8 (with the Thumb2 patches from GitHub). I have looked at this again (now with 0.8), and the situation has indeed changed. Watching the vmalloc usage of zfs (spl, actually) via /proc/vmallocinfo, it seems to hover around 100-150MB, with about 220MB being the highest I've seen, and the total address space occupied by all allocations has always been under 300MB, usually far below. So it seems rather compact, with only little fragmentation happening. zfs appears to use less vmalloc than I assumed and more lowmem, so I will have to revise my settings, because they clearly don't make sense anymore.

    Thanks @Steven Keuchel for clarifying this. I am currently running zfs 0.7.12 with the default memory settings on my helios4 and haven't encountered any issues yet. 
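    For reference, the vmalloc footprint mentioned in the quote can be checked with a one-liner along these lines. This is a sketch: it assumes the spl allocations are identifiable by an "spl" caller symbol in /proc/vmallocinfo, whose second field is the allocation size in bytes:

```shell
# Sketch: sum the vmalloc allocations attributable to spl.
# Assumes 'spl' appears in the caller column of /proc/vmallocinfo;
# field 2 is the allocation size in bytes.
grep spl /proc/vmallocinfo \
  | awk '{sum += $2} END {printf "spl vmalloc: %.1f MB\n", sum / 1048576}'
```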

     

    Regarding zfs 0.8, it would be good if you could post your experience with the patched version from zfsonlinux/zfs#9967 in that PR to get more feedback on the changes. We also still need zfs test results from other ARM 32-bit systems to establish a baseline before this can get merged.

  4. On 12/23/2019 at 8:19 AM, SvenHz said:

    If I manually use dkms to build ZFS 0.8.2 from the sources left around by apt-get, I get a successful build; however, "modprobe zfs" gives me an Exec Format Error. dmesg shows:

    zlua: section 4 reloc 44 sym 'longjmp': unsupported interworking call (Thumb -> ARM)

     

    Does anyone have 0.8.2 successfully running on a Helios4?

     

    @SvenHz I made some progress on tracking down this issue with zfs 0.8 on the Helios4. More precisely, James Robertson (@jsrlabs) identified and fixed the missing Thumb-2 instructions used by the Armbian kernel; see the discussion on the ZFS mailing list.

     

    I went ahead and opened a bug report (zfsonlinux/zfs#9957), and one of the zfs developers proposed a fix in this pull request: zfsonlinux/zfs#9967. However, the fix needs testing, so I thought I'd bring this up here in the hope that someone else will give it a shot as well.
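    For anyone willing to test the fix, the manual DKMS rebuild mentioned in the quote goes roughly like this. A sketch only: the version string is taken from the quote, and the exact module/version names on your system are assumptions (check `dkms status` first):

```shell
# Sketch: rebuild and load the zfs module via DKMS (run as root).
# The version string 0.8.2 is illustrative; check `dkms status`
# for the module/version names actually present on your system.
dkms status
dkms build -m zfs -v 0.8.2
dkms install -m zfs -v 0.8.2
if modprobe zfs; then
    echo "zfs module loaded"
else
    # On failure, the relocation error should show up here:
    dmesg | tail -n 5
fi
```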

  5. I finally got ZFS installed on my Helios4 using Armbian 19.11.3 Buster with Linux 4.19.84, but I had to stick with ZFS version 0.7.12, as advised by @SvenHz in this post. Loading the ZFS 0.8 module would fail with the same error.

     

    @Steven Keuchel Thanks again for your input on the kernel/ZFS memory settings. I had a more detailed look at your settings and noticed that you set `zfs_arc_max=805306368` (i.e. to 768 MB), which corresponds to the maximum `vmalloc` you seem to use. Or were those ZFS settings used with `CONFIG_VMSPLIT_2G`, meaning you in fact use an even higher value for `vmalloc`? Which ZFS version did you use, 0.6 or 0.7?

     

    As @qstaq stated, the default for `zfs_arc_max` is 50% of the available memory. There is, however, a pull request to increase this even further, to 3/4 of all memory or all but 1GB, whichever is greater. This change was made possible by the revised memory allocation for the ARC buffers in ZFS version 0.7. In fact, it makes me wonder whether those settings for `vmalloc` and `zfs_arc_max` still make sense for ZFS >= 0.7.
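    For concreteness, a `zfs_arc_max` limit like the one discussed here is usually set as a module option. A sketch, reusing the 768 MB value from the quoted settings (run as root; the modprobe.d file name is an assumption):

```shell
# Sketch: cap the ARC at 768 MB (768 * 1024 * 1024 = 805306368 bytes).
# Persistent across reboots via a modprobe.d file (name is illustrative):
echo 'options zfs zfs_arc_max=805306368' > /etc/modprobe.d/zfs.conf
# The same value can also be changed at runtime via the module parameter:
echo 805306368 > /sys/module/zfs/parameters/zfs_arc_max
cat /sys/module/zfs/parameters/zfs_arc_max
```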

     

    The fundamental issue with ZFS's extensive use of the virtual address space was outlined rather well by one of the developers in this ZOL issue. It was, however, addressed in version 0.7 with the introduction of the ARC Buffer Data (ABD) structure, as also mentioned in the ZFS 0.7 release notes.

     

    It appears that the issues with the virtual address space are limited to ZFS versions 0.6 and older. Or is my understanding of this incorrect? Any advice or comment in this regard would be greatly appreciated.

  6. @gprovost Sure. I wasn't aware that you accept community contributions to the wiki. I might take a stab at that once I have my own setup working ... If someone else, probably someone more experienced with ZFS, is up for it, please go ahead - I for one would certainly appreciate it.

  7. @Steven Keuchel sorry for my late reply and thanks a lot for your advice regarding the kernel memory configuration. Much appreciated!

    I will probably start with a readily available kernel, set the correct parameters as suggested, and later on look into compiling my own kernel. Do you have any advice regarding the compilation? Any pointers to documentation or even a config file would be greatly appreciated. :)


    @SvenHz Thanks for sharing your experience with zfs 0.8.2! I will stick to the 0.7 series for now as well, if at all possible.


    I think it would be good to have some kind of documentation/guidelines for setting up zfs on a Helios4, as currently all this information is scattered and one needs to go through various forum threads to get it working.

    @gprovost Have you considered putting this on the Helios4 wiki, maybe marked as an advanced or experimental setup, for people who are keen to tinker with this kind of thing?

  8. On 9/9/2019 at 7:29 PM, qstaq said:

    I wrote a simple script to install ZFS support and to set some default options for compression, arc, cache, etc. for low mem / cpu devices. The same script works without changes on Debian Buster, Ubuntu 16.04, 18.04 & 19.04, as the process is identical on all OS variants as far as I can discern.

     


    @qstaq this sounds interesting. Would you be willing to share these scripts/settings either in public or private? I reckon this could be very useful, especially for beginners like me.
