
ZFS on Helios64


grek


Hey,

I think a dedicated topic for ZFS on the Helios64 is needed. Many people want to have it :)

 

As many of us playing with dev builds for the Helios64 know, it is not easy to work with ZFS.

I wrote a few scripts that may help someone.

Thanks @jbergler and @ShadowDance, I used your ideas to complete it.

The problem comes when playing with OMV and its ZFS plugin: the plugin's dependencies replace our freshly built ZFS. I tested it; everything survived, but ZFS is downgraded to the latest version from the repo.

for example:

 

root@helios64:~# zfs --version
zfs-0.8.4-2~bpo10+1
zfs-kmod-2.0.0-rc6
root@helios64:~# uname -a
Linux helios64 5.9.11-rockchip64 #trunk.1 SMP PREEMPT Thu Nov 26 01:32:45 CET 2020 aarch64 GNU/Linux
root@helios64:~#
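If you want to reduce the chance of apt pulling the module back down, one option (untested, just an idea) is to put the hand-built packages on hold; the package names below are the ones produced by the build script later in this post:

sudo apt-mark hold zfs kmod-zfs-$(uname -r)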

 

I tested it with kernel 5.9.10 and with today's clean install of 5.9.11.

 

First we need to have Docker installed (armbian-config -> Software -> Softy -> Docker).

 

Create a dedicated directory with a Dockerfile (we will customize the ubuntu:bionic image with the required libraries and GCC 10; in later builds we can skip this step and just customize and run the build-zfs.sh script).

mkdir zfs-builder
cd zfs-builder
vi Dockerfile

 

FROM ubuntu:bionic
RUN apt update && \
    apt install -y build-essential autoconf automake bison flex libtool gawk alien fakeroot dkms \
        libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev \
        python3 python3-dev python3-setuptools python3-cffi libffi-dev && \
    apt install -y software-properties-common && \
    add-apt-repository ppa:ubuntu-toolchain-r/test && \
    apt install -y gcc-10 g++-10 && \
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10 && \
    update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10

 

Build the Docker image we will use for compiling:

 

docker build --tag zfs-build-ubuntu-bionic:0.1 .
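You can verify the image was created before moving on:

docker images zfs-build-ubuntu-bionic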

 

Create the ZFS build script:

 

vi build-zfs.sh

 

#!/bin/bash

# ZFS version to build
zfsver="zfs-2.0.0-rc6"

# Create the build directory (and clean up any script left from a previous run)
mkdir -p /tmp/zfs-builds && cd "$_"
rm -rf /tmp/zfs-builds/inside_zfs.sh

# Download headers for the running Armbian kernel and the matching ZFS sources
apt-get download linux-headers-current-rockchip64
git clone -b $zfsver https://github.com/openzfs/zfs.git $zfsver-$(uname -r)

# Create the script to execute inside the container.
# The heredoc delimiter is unquoted, so $zfsver, $(uname -r) and $(nproc)
# are expanded here on the host, while the file is being written.
echo "creating file to execute inside container"
cat > /tmp/zfs-builds/inside_zfs.sh <<EOF
#!/bin/bash
cd /scratch/
dpkg -i linux-headers-current-*.deb

cd "/scratch/$zfsver-$(uname -r)"
sh autogen.sh
./configure
make -s -j$(nproc)
make deb
mkdir "/scratch/deb-$zfsver-$(uname -r)"
cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
rm -rf "/scratch/$zfsver-$(uname -r)"
exit
EOF

chmod +x /tmp/zfs-builds/inside_zfs.sh

echo ""
echo "####################"
echo "starting container.."
echo "####################"
echo ""
docker run --rm -it -v /tmp/zfs-builds:/scratch zfs-build-ubuntu-bionic:0.1 /bin/bash /scratch/inside_zfs.sh

# Remove any distro-packaged ZFS (if installed), unload the old modules,
# then install the freshly built debs.
modprobe -r zfs zunicode zzstd zlua zcommon znvpair zavl icp spl
apt remove --yes zfsutils-linux zfs-zed zfs-initramfs
apt autoremove --yes
dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/kmod-zfs-$(uname -r)*.deb
dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/{libnvpair1,libuutil1,libzfs2,libzpool2,python3-pyzfs,zfs}_*.deb

echo ""
echo "###################"
echo "build complete"
echo "###################"
echo ""

 

chmod +x build-zfs.sh

screen -L -Logfile buildlog.txt ./build-zfs.sh
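Once the build completes, a quick sanity check (versions shown will be whatever you built):

zfs --version
zpool status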


Good initiative on this thread, grek, and nice to see a full set of instructions all the way from installing Docker :D!

 

I would also echo the zfs-dkms recommendation, when possible. But at least for Buster users we need a workaround. The zfs-dkms package can't be compiled (the kernel is configured with features not supported by the old(er) GCC), and the zfs-dkms version in backports is 0.8.4, so unless patches have been backported from 0.8.5, kernels 5.8-5.9 aren't supported.


45 minutes ago, FloBaoti said:

For information: I'm able to run ZFS by compiling kmod-zfs (make deb-kmod) from the 0.8.5 release with the Docker method. Then I install zfsutils-linux from buster-backports and it works well.

This is what I do as well :thumbup:, although I had to create a dummy zfs-dkms package to prevent it from being pulled in by zfsutils-linux.
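For anyone wondering how to build such a dummy package, one common way is equivs; a minimal sketch (the version string here is made up, just pick one high enough to satisfy the dependency):

apt install equivs
cat > zfs-dkms-dummy <<'EOF'
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: zfs-dkms
Version: 0.8.5-99~dummy
Description: dummy zfs-dkms (real module built from source)
EOF
equivs-build zfs-dkms-dummy
dpkg -i zfs-dkms_0.8.5-99~dummy_all.deb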


I've gotten ZFS working pretty well following the compile instructions on their website. I just put everything in a chroot to make it cleaner.

# First, install a few things we're going to need:
sudo apt install debootstrap

# Now, create our chroot
mkdir -p chroot-zfs

# This will take a while
sudo debootstrap --variant=buildd focal chroot-zfs

# mount proc, sys, and dev
sudo mount -t proc /proc chroot-zfs/proc
sudo mount --rbind /sys chroot-zfs/sys
sudo mount --rbind /dev chroot-zfs/dev

# Copy the files we need for apt. The default sources.list is missing many things we'll need.
cp /etc/apt/sources.list .
cat /etc/apt/sources.list.d/armbian.list >> sources.list
sudo cp sources.list chroot-zfs/etc/apt/sources.list

# Chroot in
sudo chroot chroot-zfs /bin/bash

# Install wget and gnupg
apt install -y wget gnupg

# Add the repository key
wget -qO - http://apt.armbian.com/armbian.key | apt-key add -

# Update
apt update
apt upgrade -y

# From here, follow the instructions from ZFS
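When the build is done, a rough sketch of the way back out (assuming you cloned and built under /root/zfs inside the chroot; adjust to wherever you actually built):

# Leave the chroot and clean up the mounts
exit
sudo umount -R chroot-zfs/proc chroot-zfs/sys chroot-zfs/dev

# Install the resulting packages from outside, e.g. the kmod:
sudo dpkg -i chroot-zfs/root/zfs/kmod-zfs-*.deb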

 

The only problem I ran into was that the linux-headers-current-rockchip64 package was for 5.9.10 and not 5.9.11. I did an Armbian cross-build in a VM to get updated headers. Once I installed those in the chroot, I was able to build the kmods.

 

I'm sure there's a way to include building ZFS 2.0 with the Armbian cross-compile setup; I just haven't figured out how yet.


On 11/26/2020 at 12:00 PM, Gavin said:

I can't get the zpool to automount on reboot, i.e. the zfs-import service does not want to start on boot and I have to start it manually?

 

Yes, once you have everything installed, you need to enable zfs-import-cache, zfs-import.target, zfs-mount, zfs.target, and zfs-zed.
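For example (a sketch; unit names as shipped by the OpenZFS packages):

sudo systemctl enable zfs-import-cache.service zfs-import.target zfs-mount.service zfs-zed.service zfs.target
sudo systemctl start zfs.target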


On 11/27/2020 at 4:51 PM, gprovost said:

IMHO zfs-dkms would be the easiest approach for most people.

The problem here is that it's not possible to compile the module on Debian because of how the kernel has been built.

I reported the issue here and while I could *fix* it, it really strikes me as something the core armbian team needs to weigh in on.

One option is to use an older GCC in the build system, the other is to disable per-task stack protections in the kernel - neither seems like a great choice to me.

 


4 hours ago, 0utc45t said:

I'm new here, joined to say that zfs support for Helios64 is very needed. Have tried to get it working, but failing miserably. Need help from more knowledgeable people, so I'll be following this thread.

 

In that case: please be aware that the Armbian Ubuntu zfs-dkms package works well, but you mustn't upgrade to the most recent kernel release.

(The working base image would be Armbian_20.08.21_Helios64_focal_current_5.8.17.img.xz - the zfs-dkms package from the Armbian Ubuntu repository works out of the box.)

 

There appear to be compile issues with the current upgrade/release (20.11/20.11.1 with kernel 5.9.x).


4 hours ago, SIGSEGV said:

With yesterday's release of ZFS on Linux 2.0, shouldn't all of this be easier?

 

That's a good point, and I suspect maybe (probably?) so. However, a quarterly release was just made, so it will probably be some time until this makes it into Armbian. Maybe the next quarterly release, maybe the one after that (I have no idea, and please don't take that as any sort of promise, or ask "when" etc., because that's not how any of this works).

 

I realize that you guys have purchased a commercial product, and therefore may have certain expectations. At the same time I sense that this may be some people's first introduction to how a true Free Software, community-based project works. So what I will say in this post applies to Armbian (the community project) itself. You also have support from your vendor (Kobol), who are working on things; in fact they work hand in hand with the Armbian project, and in general such partnerships are a Good Thing (for lots of reasons). But I am part of the Armbian project, and so I can only speak for our part, not for Kobol, the vendor you bought the hardware from.

 

Now, having gotten that out of the way, understand there is no big company, business model, or anything else behind Armbian. It's just us, you and me, little guys like us all over the world. You guys are doing a great job in here helping each other out and figuring things out, together. And this is why we have forums, and everything is humming along nicely, as it should, so far.

 

Now the next step, for those who have an interest in this board, and in ZFS (and I detect a lot of interest in this thread), along with the requisite technical ability, would be to get involved in figuring out how to get this into upstream Armbian, for the benefit of everyone.

 

For example, already in another thread a young man figured out the build steps, distilled that down into some bash script, and shared his results publicly.  That is based on current stuff, and should be a good stop-gap / workaround to point people to in the meantime.  And it's a great example of how F/LOSS development marches forward, one little step at a time, slowly but steadily, based on work of individuals.  Which is the only way any of this works.

 

Which leaves the question of new development going forward. @SIGSEGV raises a good point about OpenZFS 2.0. I cannot speak to specifics because I do not own this hardware, and I am not a dev anyway, so I cannot help out in that way. However, if any of you want to give it a try, by all means please do. Make another forum post about getting that working, share your results with one another, test, test, and when it gets stable, I am sure it will make it into Armbian. And probably faster than waiting for someone else to do it.

 

The "core devs" as mentioned further up thread should get around to it, but I can't say when.  There are only so many of them, remember all part time and mostly unpaid.[0]  I'm just trying to paint a realistic picture here of the resources we have to work with, which are scant (especially considering people's apparent expectations).  But also I want to make the point that you too, can become a "core dev" simply by sharing what you have learned, and contributing that back to the project in whatever form.  And every little step forward helps.  If you investigate, share your results.  Test, and share your results (good or bad!).  Many hands make light work, and thus the development (and maintenance) burden is shared and we all move forward together.

 

I hope this post was helpful in giving a broader picture of why it's against the (Armbian) rules[1] to ask "when" and similar questions. In fact, feel free to point people towards this post when such questions come up. Even by doing this, you are helping. By hanging out around the forums. This is all I do; I am not a dev. I have also started contributing to the docs lately; in fact I plan to re-work the relevant parts of the docs touching on similar themes as this post. I have a whole list of things I want to add or re-work in the docs. We each do our own little part, as much as we are able (and perhaps more importantly, willing).

 

Cheers!  :beer:

 

[0] In case this comes up, yes Kobol have contributed financially to Armbian development and have been doing so for years.  They are a great company and partners and do things The Right Way.

[1] Those rules only apply to the Armbian forums overall; we take a much more hands-off approach here in Kobol Club, as this is the official support forum for some commercial products which are based on Armbian (i.e., rules are different here, and set by Kobol).

Edited by TRS-80
grammar

13 minutes ago, snakekick said:

Hello,

For me, on Ubuntu, it works great to add this PPA, and all things are fine ;)

https://launchpad.net/~jonathonf/+archive/ubuntu/zfs?field.series_filter=focal

 

You only need to install the kernel headers with armbian-config, and then you can install zfs-dkms.
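In other words, a sketch of that workaround (kernel headers installed via armbian-config first):

sudo add-apt-repository ppa:jonathonf/zfs
sudo apt update
sudo apt install zfs-dkms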

 

Thanks for sharing the workaround; glad it's working for you, and welcome to the forums.

 

However, fixing the upstream kernel/GCC issue is a much broader topic touching many different boards, which makes it a much more complicated question in terms of what the project supports overall.

 

So apparently my whole rah-rah about "helping out" does not exactly apply in this particular case, as it really is an architectural decision, which needs to be made very carefully at a high level in the project. In other words, what @jbergler already mentioned (in fact, he was the original one filing the bug report at GitHub that I linked in this post).

 

Sorry if it took me a while to come full circle; sometimes I can be slow on the uptake. :D

 

Everything I wrote above is still (generally) true, however, so I leave it there for now. In fact I could cite jbergler's contribution as evidence of same...

Edited by TRS-80
slow on uptake, lol

I agree with @TRS-80.

Some topics are better documented with a small How To post or even within the Wiki pages of the Kobol/Helios64 projects.

That being said - I will post how to get iSCSI targets published on Armbian, now that the new kernels have been released; hopefully it will be made sticky for others wanting to do the same.

 

Regarding the recent release of ZFS on Linux - let's try it out and document our findings.

I will for sure test it out and come back here to document how it went.


I added zfs-dkms_0.8.5-2~20.04.york0_all to our repository, so at least the Ubuntu version works OOB (apt install zfs-dkms). I tested the installation on an Odroid N2+ running 5.9.11.

* Packaging mirrors need some time to receive updates, so you might get some file-not-found errors, but it will work tomorrow. Headers must be installed from armbian-config.
 

Spoiler

root@odroidn2:~# modinfo zfs
filename:       /lib/modules/5.9.11-meson64/updates/dkms/zfs.ko
version:        0.8.5-2~20.04.york0
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
alias:          devname:zfs
alias:          char-major-10-249
srcversion:     5018CB71E1EDA9749200883
depends:        zlua,spl,znvpair,zcommon,icp,zunicode,zavl
name:           zfs
vermagic:       5.9.11-meson64 SMP preempt mod_unload aarch64
parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)
parm:           zvol_major:Major number for zvol device (uint)
parm:           zvol_threads:Max number of threads to handle I/O requests (uint)
parm:           zvol_request_sync:Synchronously handle bio requests (uint)
parm:           zvol_max_discard_blocks:Max number of blocks to discard (ulong)
parm:           zvol_prefetch_bytes:Prefetch N bytes at zvol start+end (uint)
parm:           zvol_volmode:Default volmode property value (uint)
parm:           zfs_key_max_salt_uses:Max number of times a salt value can be used for generating encryption keys before it is rotated (ulong)
parm:           zio_slow_io_ms:Max I/O completion time (milliseconds) before marking it as slow (int)
parm:           zio_requeue_io_start_cut_in_line:Prioritize requeued I/O (int)
parm:           zfs_sync_pass_deferred_free:Defer frees starting in this pass (int)
parm:           zfs_sync_pass_dont_compress:Don't compress starting in this pass (int)
parm:           zfs_sync_pass_rewrite:Rewrite new bps starting in this pass (int)
parm:           zio_dva_throttle_enabled:Throttle block allocations in the ZIO pipeline (int)
parm:           zio_deadman_log_all:Log all slow ZIOs, not just those with vdevs (int)
parm:           zfs_commit_timeout_pct:ZIL block open timeout percentage (int)
parm:           zil_replay_disable:Disable intent logging replay (int)
parm:           zil_nocacheflush:Disable ZIL cache flushes (int)
parm:           zil_slog_bulk:Limit in bytes slog sync writes per commit (ulong)
parm:           zil_maxblocksize:Limit in bytes of ZIL log block size (int)
parm:           zfs_object_mutex_size:Size of znode hold array (uint)
parm:           zfs_unlink_suspend_progress:Set to prevent async unlinks (debug - leaks space into the unlinked set) (int)
parm:           zfs_delete_blocks:Delete files larger than N blocks async (ulong)
parm:           zfs_read_chunk_size:Bytes to read per chunk (ulong)
parm:           zfs_immediate_write_sz:Largest data block to write to zil (long)
parm:           zfs_dbgmsg_enable:Enable ZFS debug message log (int)
parm:           zfs_dbgmsg_maxsize:Maximum ZFS debug log size (int)
parm:           zfs_admin_snapshot:Enable mkdir/rmdir/mv in .zfs/snapshot (int)
parm:           zfs_expire_snapshot:Seconds to expire .zfs/snapshot (int)
parm:           zfs_lua_max_instrlimit:Max instruction limit that can be specified for a channel program (ulong)
parm:           zfs_lua_max_memlimit:Max memory limit that can be specified for a channel program (ulong)
parm:           zap_iterate_prefetch:When iterating ZAP object, prefetch it (int)
parm:           zfs_trim_extent_bytes_max:Max size of TRIM commands, larger will be split (uint)
parm:           zfs_trim_extent_bytes_min:Min size of TRIM commands, smaller will be skipped (uint)
parm:           zfs_trim_metaslab_skip:Skip metaslabs which have never been initialized (uint)
parm:           zfs_trim_txg_batch:Min number of txgs to aggregate frees before issuing TRIM (uint)
parm:           zfs_trim_queue_limit:Max queued TRIMs outstanding per leaf vdev (uint)
parm:           zfs_removal_ignore_errors:Ignore hard IO errors when removing device (int)
parm:           zfs_remove_max_segment:Largest contiguous segment to allocate when removing device (int)
parm:           vdev_removal_max_span:Largest span of free chunks a remap segment can span (int)
parm:           zfs_removal_suspend_progress:Pause device removal after this many bytes are copied (debug use only - causes removal to hang) (int)
parm:           zfs_vdev_raidz_impl:Select raidz implementation.
parm:           zfs_vdev_aggregation_limit:Max vdev I/O aggregation size (int)
parm:           zfs_vdev_aggregation_limit_non_rotating:Max vdev I/O aggregation size for non-rotating media (int)
parm:           zfs_vdev_aggregate_trim:Allow TRIM I/O to be aggregated (int)
parm:           zfs_vdev_read_gap_limit:Aggregate read I/O over gap (int)
parm:           zfs_vdev_write_gap_limit:Aggregate write I/O over gap (int)
parm:           zfs_vdev_max_active:Maximum number of active I/Os per vdev (int)
parm:           zfs_vdev_async_write_active_max_dirty_percent:Async write concurrency max threshold (int)
parm:           zfs_vdev_async_write_active_min_dirty_percent:Async write concurrency min threshold (int)
parm:           zfs_vdev_async_read_max_active:Max active async read I/Os per vdev (int)
parm:           zfs_vdev_async_read_min_active:Min active async read I/Os per vdev (int)
parm:           zfs_vdev_async_write_max_active:Max active async write I/Os per vdev (int)
parm:           zfs_vdev_async_write_min_active:Min active async write I/Os per vdev (int)
parm:           zfs_vdev_initializing_max_active:Max active initializing I/Os per vdev (int)
parm:           zfs_vdev_initializing_min_active:Min active initializing I/Os per vdev (int)
parm:           zfs_vdev_removal_max_active:Max active removal I/Os per vdev (int)
parm:           zfs_vdev_removal_min_active:Min active removal I/Os per vdev (int)
parm:           zfs_vdev_scrub_max_active:Max active scrub I/Os per vdev (int)
parm:           zfs_vdev_scrub_min_active:Min active scrub I/Os per vdev (int)
parm:           zfs_vdev_sync_read_max_active:Max active sync read I/Os per vdev (int)
parm:           zfs_vdev_sync_read_min_active:Min active sync read I/Os per vdev (int)
parm:           zfs_vdev_sync_write_max_active:Max active sync write I/Os per vdev (int)
parm:           zfs_vdev_sync_write_min_active:Min active sync write I/Os per vdev (int)
parm:           zfs_vdev_trim_max_active:Max active trim/discard I/Os per vdev (int)
parm:           zfs_vdev_trim_min_active:Min active trim/discard I/Os per vdev (int)
parm:           zfs_vdev_queue_depth_pct:Queue depth percentage for each top-level vdev (int)
parm:           zfs_vdev_mirror_rotating_inc:Rotating media load increment for non-seeking I/O's (int)
parm:           zfs_vdev_mirror_rotating_seek_inc:Rotating media load increment for seeking I/O's (int)
parm:           zfs_vdev_mirror_rotating_seek_offset:Offset in bytes from the last I/O which triggers a reduced rotating media seek increment (int)
parm:           zfs_vdev_mirror_non_rotating_inc:Non-rotating media load increment for non-seeking I/O's (int)
parm:           zfs_vdev_mirror_non_rotating_seek_inc:Non-rotating media load increment for seeking I/O's (int)
parm:           zfs_initialize_value:Value written during zpool initialize (ulong)
parm:           zfs_condense_indirect_vdevs_enable:Whether to attempt condensing indirect vdev mappings (int)
parm:           zfs_condense_min_mapping_bytes:Minimum size of vdev mapping to condense (ulong)
parm:           zfs_condense_max_obsolete_bytes:Minimum size obsolete spacemap to attempt condensing (ulong)
parm:           zfs_condense_indirect_commit_entry_delay_ms:Delay while condensing vdev mapping (int)
parm:           zfs_reconstruct_indirect_combinations_max:Maximum number of combinations when reconstructing split segments (int)
parm:           zfs_vdev_scheduler:I/O scheduler
parm:           zfs_vdev_cache_max:Inflate reads small than max (int)
parm:           zfs_vdev_cache_size:Total size of the per-disk cache (int)
parm:           zfs_vdev_cache_bshift:Shift size to inflate reads too (int)
parm:           zfs_vdev_default_ms_count:Target number of metaslabs per top-level vdev (int)
parm:           zfs_vdev_min_ms_count:Minimum number of metaslabs per top-level vdev (int)
parm:           zfs_vdev_ms_count_limit:Practical upper limit of total metaslabs per top-level vdev (int)
parm:           zfs_slow_io_events_per_second:Rate limit slow IO (delay) events to this many per second (uint)
parm:           zfs_checksum_events_per_second:Rate limit checksum events to this many checksum errors per second (do not set below zedthreshold). (uint)
parm:           zfs_scan_ignore_errors:Ignore errors during resilver/scrub (int)
parm:           vdev_validate_skip:Bypass vdev_validate() (int)
parm:           zfs_nocacheflush:Disable cache flushes (int)
parm:           zfs_txg_timeout:Max seconds worth of delta per txg (int)
parm:           zfs_read_history:Historical statistics for the last N reads (int)
parm:           zfs_read_history_hits:Include cache hits in read history (int)
parm:           zfs_txg_history:Historical statistics for the last N txgs (int)
parm:           zfs_multihost_history:Historical statistics for last N multihost writes (int)
parm:           zfs_flags:Set additional debugging flags (uint)
parm:           zfs_recover:Set to attempt to recover from fatal errors (int)
parm:           zfs_free_leak_on_eio:Set to ignore IO errors during free and permanently leak the space (int)
parm:           zfs_deadman_synctime_ms:Pool sync expiration time in milliseconds
parm:           zfs_deadman_ziotime_ms:IO expiration time in milliseconds
parm:           zfs_deadman_checktime_ms:Dead I/O check interval in milliseconds (ulong)
parm:           zfs_deadman_enabled:Enable deadman timer (int)
parm:           zfs_deadman_failmode:Failmode for deadman timer
parm:           spa_asize_inflation:SPA size estimate multiplication factor (int)
parm:           spa_slop_shift:Reserved free space in pool
parm:           zfs_ddt_data_is_special:Place DDT data into the special class (int)
parm:           zfs_user_indirect_is_special:Place user data indirect blocks into the special class (int)
parm:           zfs_special_class_metadata_reserve_pct:Small file blocks in special vdevs depends on this much free space available (int)
parm:           spa_config_path:SPA config file (/etc/zfs/zpool.cache) (charp)
parm:           zfs_autoimport_disable:Disable pool import at module load (int)
parm:           zfs_spa_discard_memory_limit:Maximum memory for prefetching checkpoint space map per top-level vdev while discarding checkpoint (ulong)
parm:           spa_load_verify_shift:log2(fraction of arc that can be used by inflight I/Os when verifying pool during import (int)
parm:           spa_load_verify_metadata:Set to traverse metadata on pool import (int)
parm:           spa_load_verify_data:Set to traverse data on pool import (int)
parm:           spa_load_print_vdev_tree:Print vdev tree to zfs_dbgmsg during pool import (int)
parm:           zio_taskq_batch_pct:Percentage of CPUs to run an IO worker thread (uint)
parm:           zfs_max_missing_tvds:Allow importing pool with up to this number of missing top-level vdevs (in read-only mode) (ulong)
parm:           zfs_multilist_num_sublists:Number of sublists used in each multilist (int)
parm:           zfs_multihost_fail_intervals:Max allowed period without a successful mmp write (uint)
parm:           zfs_multihost_interval:Milliseconds between mmp writes to each leaf
parm:           zfs_multihost_import_intervals:Number of zfs_multihost_interval periods to wait for activity (uint)
parm:           metaslab_aliquot:allocation granularity (a.k.a. stripe size) (ulong)
parm:           metaslab_debug_load:load all metaslabs when pool is first opened (int)
parm:           metaslab_debug_unload:prevent metaslabs from being unloaded (int)
parm:           metaslab_preload_enabled:preload potential metaslabs during reassessment (int)
parm:           zfs_mg_noalloc_threshold:percentage of free space for metaslab group to allow allocation (int)
parm:           zfs_mg_fragmentation_threshold:fragmentation for metaslab group to allow allocation (int)
parm:           zfs_metaslab_fragmentation_threshold:fragmentation for metaslab to allow allocation (int)
parm:           metaslab_fragmentation_factor_enabled:use the fragmentation metric to prefer less fragmented metaslabs (int)
parm:           metaslab_lba_weighting_enabled:prefer metaslabs with lower LBAs (int)
parm:           metaslab_bias_enabled:enable metaslab group biasing (int)
parm:           zfs_metaslab_segment_weight_enabled:enable segment-based metaslab selection (int)
parm:           zfs_metaslab_switch_threshold:segment-based metaslab selection maximum buckets before switching (int)
parm:           metaslab_force_ganging:blocks larger than this size are forced to be gang blocks (ulong)
parm:           metaslab_df_max_search:max distance (bytes) to search forward before using size tree (int)
parm:           metaslab_df_use_largest_segment:when looking in size tree, use largest segment instead of exact fit (int)
parm:           zfs_zevent_len_max:Max event queue length (int)
parm:           zfs_zevent_cols:Max event column width (int)
parm:           zfs_zevent_console:Log events to the console (int)
parm:           zfs_scan_vdev_limit:Max bytes in flight per leaf vdev for scrubs and resilvers (ulong)
parm:           zfs_scrub_min_time_ms:Min millisecs to scrub per txg (int)
parm:           zfs_obsolete_min_time_ms:Min millisecs to obsolete per txg (int)
parm:           zfs_free_min_time_ms:Min millisecs to free per txg (int)
parm:           zfs_resilver_min_time_ms:Min millisecs to resilver per txg (int)
parm:           zfs_scan_suspend_progress:Set to prevent scans from progressing (int)
parm:           zfs_no_scrub_io:Set to disable scrub I/O (int)
parm:           zfs_no_scrub_prefetch:Set to disable scrub prefetching (int)
parm:           zfs_async_block_max_blocks:Max number of blocks freed in one txg (ulong)
parm:           zfs_free_bpobj_enabled:Enable processing of the free_bpobj (int)
parm:           zfs_scan_mem_lim_fact:Fraction of RAM for scan hard limit (int)
parm:           zfs_scan_issue_strategy:IO issuing strategy during scrubbing. 0 = default, 1 = LBA, 2 = size (int)
parm:           zfs_scan_legacy:Scrub using legacy non-sequential method (int)
parm:           zfs_scan_checkpoint_intval:Scan progress on-disk checkpointing interval (int)
parm:           zfs_scan_max_ext_gap:Max gap in bytes between sequential scrub / resilver I/Os (ulong)
parm:           zfs_scan_mem_lim_soft_fact:Fraction of hard limit used as soft limit (int)
parm:           zfs_scan_strict_mem_lim:Tunable to attempt to reduce lock contention (int)
parm:           zfs_scan_fill_weight:Tunable to adjust bias towards more filled segments during scans (int)
parm:           zfs_resilver_disable_defer:Process all resilvers immediately (int)
parm:           zfs_dirty_data_max_percent:percent of ram can be dirty (int)
parm:           zfs_dirty_data_max_max_percent:zfs_dirty_data_max upper bound as % of RAM (int)
parm:           zfs_delay_min_dirty_percent:transaction delay threshold (int)
parm:           zfs_dirty_data_max:determines the dirty space limit (ulong)
parm:           zfs_dirty_data_max_max:zfs_dirty_data_max upper bound in bytes (ulong)
parm:           zfs_dirty_data_sync_percent:dirty data txg sync threshold as a percentage of zfs_dirty_data_max (int)
parm:           zfs_delay_scale:how quickly delay approaches infinity (ulong)
parm:           zfs_sync_taskq_batch_pct:max percent of CPUs that are used to sync dirty data (int)
parm:           zfs_zil_clean_taskq_nthr_pct:max percent of CPUs that are used per dp_sync_taskq (int)
parm:           zfs_zil_clean_taskq_minalloc:number of taskq entries that are pre-populated (int)
parm:           zfs_zil_clean_taskq_maxalloc:max number of taskq entries that are cached (int)
parm:           zfs_disable_ivset_guid_check:Set to allow raw receives without IVset guids (int)
parm:           zfs_max_recordsize:Max allowed record size (int)
parm:           zfs_prefetch_disable:Disable all ZFS prefetching (int)
parm:           zfetch_max_streams:Max number of streams per zfetch (uint)
parm:           zfetch_min_sec_reap:Min time before stream reclaim (uint)
parm:           zfetch_max_distance:Max bytes to prefetch per stream (default 8MB) (uint)
parm:           zfetch_array_rd_sz:Number of bytes in a array_read (ulong)
parm:           zfs_pd_bytes_max:Max number of bytes to prefetch (int)
parm:           ignore_hole_birth:Alias for send_holes_without_birth_time (int)
parm:           send_holes_without_birth_time:Ignore hole_birth txg for zfs send (int)
parm:           zfs_override_estimate_recordsize:Record size calculation override for zfs send estimates (ulong)
parm:           zfs_send_corrupt_data:Allow sending corrupt data (int)
parm:           zfs_send_queue_length:Maximum send queue length (int)
parm:           zfs_send_unmodified_spill_blocks:Send unmodified spill blocks (int)
parm:           zfs_recv_queue_length:Maximum receive queue length (int)
parm:           dmu_object_alloc_chunk_shift:CPU-specific allocator grabs 2^N objects at once (int)
parm:           zfs_nopwrite_enabled:Enable NOP writes (int)
parm:           zfs_per_txg_dirty_frees_percent:percentage of dirtied blocks from frees in one TXG (ulong)
parm:           zfs_dmu_offset_next_sync:Enable forcing txg sync to find holes (int)
parm:           dmu_prefetch_max:Limit one prefetch call to this size (int)
parm:           zfs_dedup_prefetch:Enable prefetching dedup-ed blks (int)
parm:           zfs_dbuf_state_index:Calculate arc header index (int)
parm:           dbuf_cache_max_bytes:Maximum size in bytes of the dbuf cache. (ulong)
parm:           dbuf_cache_hiwater_pct:Percentage over dbuf_cache_max_bytes when dbufs must be evicted directly. (uint)
parm:           dbuf_cache_lowater_pct:Percentage below dbuf_cache_max_bytes when the evict thread stops evicting dbufs. (uint)
parm:           dbuf_metadata_cache_max_bytes:Maximum size in bytes of the dbuf metadata cache. (ulong)
parm:           dbuf_metadata_cache_shift:int
parm:           dbuf_cache_shift:Set the size of the dbuf cache to a log2 fraction of arc size. (int)
parm:           zfs_arc_min:Min arc size
parm:           zfs_arc_max:Max arc size
parm:           zfs_arc_meta_limit:Meta limit for arc size
parm:           zfs_arc_meta_limit_percent:Percent of arc size for arc meta limit
parm:           zfs_arc_meta_min:Min arc metadata
parm:           zfs_arc_meta_prune:Meta objects to scan for prune (int)
parm:           zfs_arc_meta_adjust_restarts:Limit number of restarts in arc_adjust_meta (int)
parm:           zfs_arc_meta_strategy:Meta reclaim strategy (int)
parm:           zfs_arc_grow_retry:Seconds before growing arc size
parm:           zfs_arc_p_dampener_disable:disable arc_p adapt dampener (int)
parm:           zfs_arc_shrink_shift:log2(fraction of arc to reclaim)
parm:           zfs_arc_pc_percent:Percent of pagecache to reclaim arc to (uint)
parm:           zfs_arc_p_min_shift:arc_c shift to calc min/max arc_p
parm:           zfs_arc_average_blocksize:Target average block size (int)
parm:           zfs_compressed_arc_enabled:Disable compressed arc buffers (int)
parm:           zfs_arc_min_prefetch_ms:Min life of prefetch block in ms
parm:           zfs_arc_min_prescient_prefetch_ms:Min life of prescient prefetched block in ms (int)
parm:           l2arc_write_max:Max write bytes per interval (ulong)
parm:           l2arc_write_boost:Extra write bytes during device warmup (ulong)
parm:           l2arc_headroom:Number of max device writes to precache (ulong)
parm:           l2arc_headroom_boost:Compressed l2arc_headroom multiplier (ulong)
parm:           l2arc_feed_secs:Seconds between L2ARC writing (ulong)
parm:           l2arc_feed_min_ms:Min feed interval in milliseconds (ulong)
parm:           l2arc_noprefetch:Skip caching prefetched buffers (int)
parm:           l2arc_feed_again:Turbo L2ARC warmup (int)
parm:           l2arc_norw:No reads during writes (int)
parm:           zfs_arc_lotsfree_percent:System free memory I/O throttle in bytes
parm:           zfs_arc_sys_free:System free memory target size in bytes
parm:           zfs_arc_dnode_limit:Minimum bytes of dnodes in arc
parm:           zfs_arc_dnode_limit_percent:Percent of ARC meta buffers for dnodes (ulong)
parm:           zfs_arc_dnode_reduce_percent:Percentage of excess dnodes to try to unpin (ulong)
parm:           zfs_abd_scatter_enabled:Toggle whether ABD allocations must be linear. (int)
parm:           zfs_abd_scatter_min_size:Minimum size of scatter allocations. (int)
parm:           zfs_abd_scatter_max_order:Maximum order allocation used for a scatter ABD. (uint)

 

 


I actually managed to get ZFS working by following the post by @grek in this very thread. I've been running some fio tests to see if everything is dandy. Seems to be. :-)

root@helios64:/ironwolf # zpool --version
zfs-2.0.0-rc6
zfs-kmod-2.0.0-rc6
root@helios64:/ironwolf # zfs version
zfs-2.0.0-rc6
zfs-kmod-2.0.0-rc6
root@helios64:/ironwolf #

 

I've noticed that ZFS 2.0.0 has been released, so I'll be compiling it after the fio runs are done (they've been running for a day or two now...).


I also have zfs-2.0-rc5 running. I tried updating to rc7 and had some trouble with the deb packages. It's probably fixed in the GA build.

 

I used this guide: https://openzfs.github.io/openzfs-docs/Developer Resources/Building ZFS.html - and then `make deb` gives you debs you can install with `dpkg -i`. I installed the shared libs, zfs-dkms, and zfsutils from here.

 

As for systemd, you have to play around with it to get it to start on boot. Usually I can `sudo systemctl unmask <zfs-thing>`. Another good double-check is to compare /lib/systemd/system and /etc/systemd/system against a normal Debian-based system and see what's different. Make sure zfs.target is specified under the multi-user target!
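A quick way to check that (a sketch):

# Is zfs.target pulled in at boot?
systemctl list-dependencies multi-user.target | grep -i zfs
# If not:
sudo systemctl enable zfs.target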

 

If you use sanoid/syncoid, it's a bit of a pain. It has Debian's zfsutils as a dependency. You're probably better off downloading it from git and setting up the cron yourself. Otherwise, you have to modify your apt cache to remove zfsutils as a dependency of sanoid.
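Roughly, the from-git route looks like this (paths and schedule are only an example; sanoid also needs an /etc/sanoid/sanoid.conf written for your datasets, plus the perl Config::IniFiles module):

git clone https://github.com/jimsalterjrs/sanoid.git
sudo cp sanoid/sanoid sanoid/syncoid /usr/local/sbin/
sudo mkdir -p /etc/sanoid
sudo cp sanoid/sanoid.defaults.conf /etc/sanoid/
echo '*/15 * * * * root /usr/local/sbin/sanoid --cron' | sudo tee /etc/cron.d/sanoid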

 

I'm using zstd compression and have no complaints so far.


"Works for me".

 

I'll write a page on how I did it at the Helios64 reddit wiki, if you care for that, on the weekend.

I did have to manually insert the systemd files, as the build of those somehow didn't work as expected, but maybe I'll work that out as well once I condense my notes.

 

# zfs version
zfs-2.0.0-1
zfs-kmod-2.0.0-1
# uname -a
Linux helios 5.9.11-rockchip64 #20.11.1 SMP PREEMPT Fri Nov 27 21:59:08 CET 2020 aarch64 aarch64 aarch64 GNU/Linux
# lsb_release -d
Description:    Ubuntu 20.04.1 LTS

 


@Demodude123

It might be worth trying this approach to load the module on boot:

sudo sh -c "echo zfs >/etc/modules-load.d/zfs.conf"

 

@deucalion

@Demodude123

Were you able to make a deb package with the kmod? The dkms approach is nice, but a kernel module matching the Armbian release would probably be better for end users.

@Igor, what would your thoughts be on this matter? I'm aware that not all Armbian-supported hardware might benefit from it, since not all boards are used for storage purposes, but those that are could have a new option for the filesystem.


Here's what it built for me:

zfs-2.0.0.tar.gz
zfs-kmod-2.0.0-rc5.src.rpm
kmod-zfs-5.8.14-rockchip64_2.0.0-0_arm64.deb
kmod-zfs-devel_2.0.0-0_arm64.deb
kmod-zfs-devel-5.8.14-rockchip64_2.0.0-0_arm64.deb
zfs-dkms-2.0.0-rc5.src.rpm
zfs-dkms_2.0.0-0_arm64.deb
zfs-2.0.0-rc5.src.rpm
zfs_2.0.0-0_arm64.deb
libnvpair1_2.0.0-0_arm64.deb
libuutil1_2.0.0-0_arm64.deb
libzfs2_2.0.0-0_arm64.deb
libzpool2_2.0.0-0_arm64.deb
libzfs2-devel_2.0.0-0_arm64.deb
zfs-test_2.0.0-0_arm64.deb
zfs-dracut_2.0.0-0_arm64.deb
zfs-initramfs_2.0.0-0_arm64.deb
python3-pyzfs_2.0.0-0_arm64.deb

