
Posted

How to compile DKMS modules on the board?

 

I have been trying this for the past week without success, and I am starting to go crazy... The last kernel that worked is 6.11.2.

I tried zfs-dkms (both from apt's repositories and from a manually built .deb from the ZFS git) and the aic8800 drivers.

 

I've tried a Bookworm image and a Noble one. I've tried -current and -edge kernels, and many kernel revisions. I've built kernels (image & headers) with compile.sh, with and without Docker. I've even updated my x86 Debian to Trixie.

 

The compilers are the same on the build host and on the board:

longanpi-3h# dpkg -l | grep -i gcc
ii  gcc                                        4:13.2.0-7ubuntu1                              arm64        GNU C compiler
ii  gcc-13                                     13.3.0-6ubuntu2~24.04                          arm64        GNU C compiler
ii  gcc-13-aarch64-linux-gnu                   13.3.0-6ubuntu2~24.04                          arm64        GNU C compiler for the aarch64-linux-gnu architecture
ii  gcc-13-base:arm64                          13.3.0-6ubuntu2~24.04                          arm64        GCC, the GNU Compiler Collection (base package)
...

root@4585c06e2f54:/armbian# dpkg -l | grep -i gcc
ii  gcc                                   4:13.2.0-7ubuntu1                 amd64        GNU C compiler
ii  gcc-13                                13.3.0-6ubuntu2~24.04             amd64        GNU C compiler
ii  gcc-13-aarch64-linux-gnu              13.3.0-6ubuntu2~24.04cross1       amd64        GNU C compiler for the aarch64-linux-gnu architecture
ii  gcc-13-aarch64-linux-gnu-base:amd64   13.3.0-6ubuntu2~24.04cross1       amd64        GCC, the GNU Compiler Collection (base package)
...

 

Or even better: is it somehow possible to build those drivers on the host PC? Sixteen x86 cores are much, much faster than four A53 cores.
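What I have in mind is something like the following kbuild invocation on the host - a rough sketch only; the header path and driver directory are examples, and it assumes the board's matching headers package has been unpacked on the host:

# on the x86 host, with the cross toolchain installed
sudo apt install gcc-13-aarch64-linux-gnu
# run inside the out-of-tree driver source (e.g. the aic8800 module directory)
make -C /usr/src/linux-headers-6.12.30-current-sunxi64 M=$PWD \
     ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- modules
# then copy the resulting .ko to the board and run depmod -a there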

Posted

Even booting an image built with compile.sh and trying to load the module doesn't work:

root@longanpi-3h:~# modprobe aic_btusb_usb
modprobe: ERROR: could not insert 'aic_btusb_usb': Exec format error

root@longanpi-3h:~# dmesg | tail
[    7.172720] systemd[1]: Started systemd-rfkill.service - Load/Save RF Kill Switch Status.
[    7.390663] systemd[1]: Finished armbian-ramlog.service - Armbian memory supported logging.
[    7.437995] systemd[1]: Starting systemd-journald.service - Journal Service...
[    7.445699] systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
[    7.497556] systemd-journald[412]: Collecting audit messages is disabled.
[    7.588956] systemd[1]: Started systemd-journald.service - Journal Service.
[    7.646152] systemd-journald[412]: Received client request to flush runtime journal.
[    9.628132] EXT4-fs (mmcblk0p1): resizing filesystem from 446464 to 932864 blocks
[    9.725518] EXT4-fs (mmcblk0p1): resized filesystem to 932864
[   92.076404] module aic_btusb: .gnu.linkonce.this_module section size must match the kernel's built struct module size at run time

root@longanpi-3h:~# modinfo /lib/modules/6.12.30-current-sunxi64/updates/dkms/aic_btusb_usb.ko
filename:       /lib/modules/6.12.30-current-sunxi64/updates/dkms/aic_btusb_usb.ko
license:        GPL
version:        2.1.0
description:    AicSemi Bluetooth USB driver version
author:         AicSemi Corporation
import_ns:      VFS_internal_I_am_really_a_filesystem_and_am_NOT_a_driver
srcversion:     B6C3A1904D0AFEA27CE3E93
alias:          usb:vA69Cp88DCd*dc*dsc*dp*icE0isc01ip01in*
alias:          usb:vA69Cp8D81d*dc*dsc*dp*icE0isc01ip01in*
alias:          usb:vA69Cp8801d*dc*dsc*dp*icE0isc01ip01in*
depends:        
name:           aic_btusb
vermagic:       6.12.30-current-sunxi64 SMP mod_unload aarch64
parm:           btdual:int
parm:           bt_support:int
parm:           mp_drv_mode:0: NORMAL; 1: MP MODE (int)
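That dmesg line suggests the struct module layout in the .ko differs from the one in the running kernel - typically a config mismatch between the headers the DKMS build used and the kernel actually booted. A quick way to compare, assuming uncompressed modules; the in-tree module picked here is arbitrary:

objdump -h /lib/modules/$(uname -r)/updates/dkms/aic_btusb_usb.ko | grep this_module
# compare with any in-tree module from the same kernel
# (decompress first if modules are shipped as .ko.xz / .ko.zst)
KO=$(find /lib/modules/$(uname -r)/kernel -name '*.ko' | head -n1)
objdump -h "$KO" | grep this_module
# differing .gnu.linkonce.this_module sizes confirm the mismatch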

 

Posted
3 hours ago, Johnny on the couch said:

I am starting to go crazy

 

Welcome to the club :( 

 

3 hours ago, Johnny on the couch said:

I've tried a Bookworm image and a Noble one. I've tried -current and -edge kernels, and many kernel revisions. I've built kernels (image & headers) with compile.sh, with and without Docker. I've even updated my x86 Debian to Trixie.

 

Perhaps you would rather help us maintain and fix what is possible, so I will guide you away from things that are a complete waste of time. Such as this.

 

3 hours ago, Johnny on the couch said:

How to compile DKMS modules on the board?

 

DKMS works on Armbian, but whether this (shit) driver works is another question. In some cases it takes years before a driver becomes usable ... while performance still sucks. https://docs.armbian.com/WifiPerformance/#xradio-xr819

 

AFAIK, there is no reliable driver for the aic8800. Sorry to bring bad news ...

Posted

Hi,

 

I could try to help with maintenance, but first we should find where the problem is.

Even if we ignore the aic8800 driver (I can confirm that even when it works it is indeed shit, with occasional USB disconnections of the WLAN card), zfs-dkms still doesn't work.

The latest kernel where both drivers worked was 6.11.2. I am currently compiling 6.11.9 to see if the DKMS drivers work there.

I suspect that the kernel headers for the 6.12 and later kernels are to blame.
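For the record, this is how I check that the installed headers at least claim to match the running kernel:

uname -r
dpkg -l | grep linux-headers
# the 'build' link that DKMS compiles against
ls -l /lib/modules/$(uname -r)/build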

Posted
58 minutes ago, Johnny on the couch said:

zfs-dkms still doesn't work.


This 100% works - we are even running daily automated tests on Bookworm, Jammy and Noble.
https://github.com/armbian/os/actions/runs/15362508470/job/43232345405

 

A manual test on Rockchip64 (Bananapi M7) with 6.12.28-current, the latest kernel from the daily repository.

System: https://paste.armbian.com/tiwisuhugi
 

Here is the build log:

Setting up zfs-dkms (2.3.1-1~bpo12+1) ...
Loading new zfs-2.3.1 DKMS files...
Building for 6.12.31-current-rockchip64
Building initial module for 6.12.31-current-rockchip64
Done.

zfs.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/6.12.31-current-rockchip64/updates/dkms/

spl.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/6.12.31-current-rockchip64/updates/dkms/
depmod...
Setting up libzfs6linux:arm64 (2.3.1-1~bpo12+1) ...
Setting up zfsutils-linux (2.3.1-1~bpo12+1) ...
insmod /lib/modules/6.12.31-current-rockchip64/updates/dkms/spl.ko 
insmod /lib/modules/6.12.31-current-rockchip64/updates/dkms/zfs.ko 
Created symlink /etc/systemd/system/zfs-import.target.wants/zfs-import-cache.service → /lib/systemd/system/zfs-import-cache.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-import.target → /lib/systemd/system/zfs-import.target.
Created symlink /etc/systemd/system/zfs-mount.service.wants/zfs-load-module.service → /lib/systemd/system/zfs-load-module.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-load-module.service → /lib/systemd/system/zfs-load-module.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-mount.service → /lib/systemd/system/zfs-mount.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-share.service → /lib/systemd/system/zfs-share.service.
Created symlink /etc/systemd/system/zfs-volumes.target.wants/zfs-volume-wait.service → /lib/systemd/system/zfs-volume-wait.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-volumes.target → /lib/systemd/system/zfs-volumes.target.
Created symlink /etc/systemd/system/multi-user.target.wants/zfs.target → /lib/systemd/system/zfs.target.
zfs-import-scan.service is a disabled or a static unit, not starting it.
Processing triggers for libc-bin (2.36-9+deb12u10) ...
Processing triggers for man-db (2.11.2-2) ...
Processing triggers for initramfs-tools (0.142+deb12u3) ...
update-initramfs: Generating /boot/initrd.img-6.12.31-current-rockchip64
W: Possible missing firmware /lib/firmware/rtl_nic/rtl8126a-3.fw for module r8169
W: Possible missing firmware /lib/firmware/rtl_nic/rtl8126a-2.fw for module r8169
update-initramfs: Armbian: Converting to u-boot format: /boot/uInitrd-6.12.31-current-rockchip64
Image Name:   uInitrd
Created:      Sat May 31 16:18:38 2025
Image Type:   AArch64 Linux RAMDisk Image (gzip compressed)
Data Size:    16727849 Bytes = 16335.79 KiB = 15.95 MiB
Load Address: 00000000
Entry Point:  00000000
update-initramfs: Armbian: Symlinking /boot/uInitrd-6.12.31-current-rockchip64 to /boot/uInitrd
'/boot/uInitrd' -> 'uInitrd-6.12.31-current-rockchip64'
update-initramfs: Armbian: done.
root@bananapim7:/home/igorp# modinfo zfs
filename:       /lib/modules/6.12.31-current-rockchip64/updates/dkms/zfs.ko
version:        2.3.1-1~bpo12+1
license:        CDDL
license:        Dual BSD/GPL
license:        Dual MIT/GPL
author:         OpenZFS
description:    ZFS
alias:          zzstd
alias:          zcommon
alias:          zunicode
alias:          znvpair
alias:          zlua
alias:          icp
alias:          zavl
alias:          devname:zfs
alias:          char-major-10-249
srcversion:     2742833EE1C14D857611F06
depends:        spl
name:           zfs
vermagic:       6.12.31-current-rockchip64 SMP preempt mod_unload aarch64
parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)
parm:           zvol_major:Major number for zvol device (uint)
parm:           zvol_threads:Number of threads to handle I/O requests. Setto 0 to use all active CPUs (uint)
parm:           zvol_request_sync:Synchronously handle bio requests (uint)
parm:           zvol_max_discard_blocks:Max number of blocks to discard (ulong)
parm:           zvol_num_taskqs:Number of zvol taskqs (uint)
parm:           zvol_prefetch_bytes:Prefetch N bytes at zvol start+end (uint)
parm:           zvol_volmode:Default volmode property value (uint)
parm:           zvol_blk_mq_queue_depth:Default blk-mq queue depth (uint)
parm:           zvol_use_blk_mq:Use the blk-mq API for zvols (uint)
parm:           zvol_blk_mq_blocks_per_thread:Process volblocksize blocks per thread (uint)
parm:           zvol_open_timeout_ms:Timeout for ZVOL open retries (uint)
parm:           zfs_xattr_compat:Use legacy ZFS xattr naming for writing new user namespace xattrs
parm:           zfs_fallocate_reserve_percent:Percentage of length to use for the available capacity check (uint)
parm:           zfs_key_max_salt_uses:Max number of times a salt value can be used for generating encryption keys before it is rotated (ulong)
parm:           zfs_object_mutex_size:Size of znode hold array (uint)
parm:           zfs_unlink_suspend_progress:Set to prevent async unlinks (debug - leaks space into the unlinked set) (int)
parm:           zfs_delete_blocks:Delete files larger than N blocks async (ulong)
parm:           zfs_dbgmsg_enable:Enable ZFS debug message log (int)
parm:           zfs_dbgmsg_maxsize:Maximum ZFS debug log size (uint)
parm:           zfs_admin_snapshot:Enable mkdir/rmdir/mv in .zfs/snapshot (int)
parm:           zfs_expire_snapshot:Seconds to expire .zfs/snapshot (int)
parm:           zfs_snapshot_no_setuid:Disable setuid/setgid for automounts in .zfs/snapshot (int)
parm:           zfs_vdev_scheduler:I/O scheduler
parm:           zfs_vdev_open_timeout_ms:Timeout before determining that a device is missing
parm:           zfs_vdev_failfast_mask:Defines failfast mask: 1 - device, 2 - transport, 4 - driver
parm:           zfs_vdev_disk_max_segs:Maximum number of data segments to add to an IO request (min 4)
parm:           zfs_vdev_disk_classic:Use classic BIO submission method
parm:           zfs_arc_shrinker_limit:Limit on number of pages that ARC shrinker can reclaim at once
parm:           zfs_arc_shrinker_seeks:Relative cost of ARC eviction vs other kernel subsystems
parm:           zfs_abd_scatter_enabled:Toggle whether ABD allocations must be linear. (int)
parm:           zfs_abd_scatter_min_size:Minimum size of scatter allocations. (int)
parm:           zfs_abd_scatter_max_order:Maximum order allocation used for a scatter ABD. (uint)
parm:           zio_slow_io_ms:Max I/O completion time (milliseconds) before marking it as slow
parm:           zio_requeue_io_start_cut_in_line:Prioritize requeued I/O
parm:           zfs_sync_pass_deferred_free:Defer frees starting in this pass
parm:           zfs_sync_pass_dont_compress:Don't compress starting in this pass
parm:           zfs_sync_pass_rewrite:Rewrite new bps starting in this pass
parm:           zio_dva_throttle_enabled:Throttle block allocations in the ZIO pipeline
parm:           zio_deadman_log_all:Log all slow ZIOs, not just those with vdevs
parm:           zfs_commit_timeout_pct:ZIL block open timeout percentage
parm:           zil_replay_disable:Disable intent logging replay
parm:           zil_nocacheflush:Disable ZIL cache flushes
parm:           zil_slog_bulk:Limit in bytes slog sync writes per commit
parm:           zil_maxblocksize:Limit in bytes of ZIL log block size
parm:           zil_maxcopied:Limit in bytes WR_COPIED size
parm:           zfs_vnops_read_chunk_size:Bytes to read per chunk
parm:           zfs_bclone_enabled:Enable block cloning
parm:           zfs_bclone_wait_dirty:Wait for dirty blocks when cloning
parm:           zfs_dio_enabled:Enable Direct I/O
parm:           zfs_zil_saxattr:Disable xattr=sa extended attribute logging in ZIL by settng 0.
parm:           zfs_immediate_write_sz:Largest data block to write to zil
parm:           zfs_max_nvlist_src_size:Maximum size in bytes allowed for src nvlist passed with ZFS ioctls
parm:           zfs_history_output_max:Maximum size in bytes of ZFS ioctl output that will be logged
parm:           zfs_zevent_retain_max:Maximum recent zevents records to retain for duplicate checking
parm:           zfs_zevent_retain_expire_secs:Expiration time for recent zevents records
parm:           zfs_lua_max_instrlimit:Max instruction limit that can be specified for a channel program
parm:           zfs_lua_max_memlimit:Max memory limit that can be specified for a channel program
parm:           zap_micro_max_size:Maximum micro ZAP size before converting to a fat ZAP, in bytes (max 1M)
parm:           zap_iterate_prefetch:When iterating ZAP object, prefetch it
parm:           zap_shrink_enabled:Enable ZAP shrinking
parm:           zfs_trim_extent_bytes_max:Max size of TRIM commands, larger will be split
parm:           zfs_trim_extent_bytes_min:Min size of TRIM commands, smaller will be skipped
parm:           zfs_trim_metaslab_skip:Skip metaslabs which have never been initialized
parm:           zfs_trim_txg_batch:Min number of txgs to aggregate frees before issuing TRIM
parm:           zfs_trim_queue_limit:Max queued TRIMs outstanding per leaf vdev
parm:           zfs_removal_ignore_errors:Ignore hard IO errors when removing device
parm:           zfs_remove_max_segment:Largest contiguous segment to allocate when removing device
parm:           vdev_removal_max_span:Largest span of free chunks a remap segment can span
parm:           zfs_removal_suspend_progress:Pause device removal after this many bytes are copied (debug use only - causes removal to hang)
parm:           zfs_rebuild_max_segment:Max segment size in bytes of rebuild reads
parm:           zfs_rebuild_vdev_limit:Max bytes in flight per leaf vdev for sequential resilvers
parm:           zfs_rebuild_scrub_enabled:Automatically scrub after sequential resilver completes
parm:           raidz_expand_max_reflow_bytes:For testing, pause RAIDZ expansion after reflowing this many bytes
parm:           raidz_expand_max_copy_bytes:Max amount of concurrent i/o for RAIDZ expansion
parm:           raidz_io_aggregate_rows:For expanded RAIDZ, aggregate reads that have more rows than this
parm:           zfs_scrub_after_expand:For expanded RAIDZ, automatically start a pool scrub when expansion completes
parm:           zfs_vdev_aggregation_limit:Max vdev I/O aggregation size
parm:           zfs_vdev_aggregation_limit_non_rotating:Max vdev I/O aggregation size for non-rotating media
parm:           zfs_vdev_read_gap_limit:Aggregate read I/O over gap
parm:           zfs_vdev_write_gap_limit:Aggregate write I/O over gap
parm:           zfs_vdev_max_active:Maximum number of active I/Os per vdev
parm:           zfs_vdev_async_write_active_max_dirty_percent:Async write concurrency max threshold
parm:           zfs_vdev_async_write_active_min_dirty_percent:Async write concurrency min threshold
parm:           zfs_vdev_async_read_max_active:Max active async read I/Os per vdev
parm:           zfs_vdev_async_read_min_active:Min active async read I/Os per vdev
parm:           zfs_vdev_async_write_max_active:Max active async write I/Os per vdev
parm:           zfs_vdev_async_write_min_active:Min active async write I/Os per vdev
parm:           zfs_vdev_initializing_max_active:Max active initializing I/Os per vdev
parm:           zfs_vdev_initializing_min_active:Min active initializing I/Os per vdev
parm:           zfs_vdev_removal_max_active:Max active removal I/Os per vdev
parm:           zfs_vdev_removal_min_active:Min active removal I/Os per vdev
parm:           zfs_vdev_scrub_max_active:Max active scrub I/Os per vdev
parm:           zfs_vdev_scrub_min_active:Min active scrub I/Os per vdev
parm:           zfs_vdev_sync_read_max_active:Max active sync read I/Os per vdev
parm:           zfs_vdev_sync_read_min_active:Min active sync read I/Os per vdev
parm:           zfs_vdev_sync_write_max_active:Max active sync write I/Os per vdev
parm:           zfs_vdev_sync_write_min_active:Min active sync write I/Os per vdev
parm:           zfs_vdev_trim_max_active:Max active trim/discard I/Os per vdev
parm:           zfs_vdev_trim_min_active:Min active trim/discard I/Os per vdev
parm:           zfs_vdev_rebuild_max_active:Max active rebuild I/Os per vdev
parm:           zfs_vdev_rebuild_min_active:Min active rebuild I/Os per vdev
parm:           zfs_vdev_nia_credit:Number of non-interactive I/Os to allow in sequence
parm:           zfs_vdev_nia_delay:Number of non-interactive I/Os before _max_active
parm:           zfs_vdev_queue_depth_pct:Queue depth percentage for each top-level vdev
parm:           zfs_vdev_def_queue_depth:Default queue depth for each allocator
parm:           zfs_vdev_mirror_rotating_inc:Rotating media load increment for non-seeking I/Os
parm:           zfs_vdev_mirror_rotating_seek_inc:Rotating media load increment for seeking I/Os
parm:           zfs_vdev_mirror_rotating_seek_offset:Offset in bytes from the last I/O which triggers a reduced rotating media seek increment
parm:           zfs_vdev_mirror_non_rotating_inc:Non-rotating media load increment for non-seeking I/Os
parm:           zfs_vdev_mirror_non_rotating_seek_inc:Non-rotating media load increment for seeking I/Os
parm:           zfs_initialize_value:Value written during zpool initialize
parm:           zfs_initialize_chunk_size:Size in bytes of writes by zpool initialize
parm:           zfs_condense_indirect_vdevs_enable:Whether to attempt condensing indirect vdev mappings
parm:           zfs_condense_indirect_obsolete_pct:Minimum obsolete percent of bytes in the mapping to attempt condensing
parm:           zfs_condense_min_mapping_bytes:Don't bother condensing if the mapping uses less than this amount of memory
parm:           zfs_condense_max_obsolete_bytes:Minimum size obsolete spacemap to attempt condensing
parm:           zfs_condense_indirect_commit_entry_delay_ms:Used by tests to ensure certain actions happen in the middle of a condense. A maximum value of 1 should be sufficient.
parm:           zfs_reconstruct_indirect_combinations_max:Maximum number of combinations when reconstructing split segments
parm:           vdev_file_logical_ashift:Logical ashift for file-based devices
parm:           vdev_file_physical_ashift:Physical ashift for file-based devices
parm:           zfs_vdev_default_ms_count:Target number of metaslabs per top-level vdev
parm:           zfs_vdev_default_ms_shift:Default lower limit for metaslab size
parm:           zfs_vdev_max_ms_shift:Default upper limit for metaslab size
parm:           zfs_vdev_min_ms_count:Minimum number of metaslabs per top-level vdev
parm:           zfs_vdev_ms_count_limit:Practical upper limit of total metaslabs per top-level vdev
parm:           zfs_slow_io_events_per_second:Rate limit slow IO (delay) events to this many per second
parm:           zfs_deadman_events_per_second:Rate limit hung IO (deadman) events to this many per second
parm:           zfs_dio_write_verify_events_per_second:Rate Direct I/O write verify events to this many per second
parm:           zfs_vdev_direct_write_verify:Direct I/O writes will perform for checksum verification before commiting write
parm:           zfs_checksum_events_per_second:Rate limit checksum events to this many checksum errors per second (do not set below ZED threshold).
parm:           zfs_scan_ignore_errors:Ignore errors during resilver/scrub
parm:           vdev_validate_skip:Bypass vdev_validate()
parm:           zfs_nocacheflush:Disable cache flushes
parm:           zfs_embedded_slog_min_ms:Minimum number of metaslabs required to dedicate one for log blocks
parm:           zfs_vdev_min_auto_ashift:Minimum ashift used when creating new top-level vdevs
parm:           zfs_vdev_max_auto_ashift:Maximum ashift used when optimizing for logical -> physical sector size on new top-level vdevs
parm:           zfs_vdev_raidz_impl:RAIDZ implementation
parm:           zfs_txg_timeout:Max seconds worth of delta per txg
parm:           zfs_read_history:Historical statistics for the last N reads
parm:           zfs_read_history_hits:Include cache hits in read history
parm:           zfs_txg_history:Historical statistics for the last N txgs
parm:           zfs_multihost_history:Historical statistics for last N multihost writes
parm:           zfs_flags:Set additional debugging flags
parm:           zfs_recover:Set to attempt to recover from fatal errors
parm:           zfs_free_leak_on_eio:Set to ignore IO errors during free and permanently leak the space
parm:           zfs_deadman_checktime_ms:Dead I/O check interval in milliseconds
parm:           zfs_deadman_enabled:Enable deadman timer
parm:           spa_asize_inflation:SPA size estimate multiplication factor
parm:           zfs_ddt_data_is_special:Place DDT data into the special class
parm:           zfs_user_indirect_is_special:Place user data indirect blocks into the special class
parm:           zfs_deadman_failmode:Failmode for deadman timer
parm:           zfs_deadman_synctime_ms:Pool sync expiration time in milliseconds
parm:           zfs_deadman_ziotime_ms:IO expiration time in milliseconds
parm:           zfs_special_class_metadata_reserve_pct:Small file blocks in special vdevs depends on this much free space available
parm:           spa_slop_shift:Reserved free space in pool
parm:           spa_num_allocators:Number of allocators per spa
parm:           spa_cpus_per_allocator:Minimum number of CPUs per allocators
parm:           zfs_unflushed_max_mem_amt:Specific hard-limit in memory that ZFS allows to be used for unflushed changes
parm:           zfs_unflushed_max_mem_ppm:Percentage of the overall system memory that ZFS allows to be used for unflushed changes (value is calculated over 1000000 for finer granularity)
parm:           zfs_unflushed_log_block_max:Hard limit (upper-bound) in the size of the space map log in terms of blocks.
parm:           zfs_unflushed_log_block_min:Lower-bound limit for the maximum amount of blocks allowed in log spacemap (see zfs_unflushed_log_block_max)
parm:           zfs_unflushed_log_txg_max:Hard limit (upper-bound) in the size of the space map log in terms of dirty TXGs.
parm:           zfs_unflushed_log_block_pct:Tunable used to determine the number of blocks that can be used for the spacemap log, expressed as a percentage of the total number of metaslabs in the pool (e.g. 400 means the number of log blocks is capped at 4 times the number of metaslabs)
parm:           zfs_max_log_walking:The number of past TXGs that the flushing algorithm of the log spacemap feature uses to estimate incoming log blocks
parm:           zfs_keep_log_spacemaps_at_export:Prevent the log spacemaps from being flushed and destroyed during pool export/destroy
parm:           zfs_max_logsm_summary_length:Maximum number of rows allowed in the summary of the spacemap log
parm:           zfs_min_metaslabs_to_flush:Minimum number of metaslabs to flush per dirty TXG
parm:           spa_upgrade_errlog_limit:Limit the number of errors which will be upgraded to the new on-disk error log when enabling head_errlog
parm:           spa_config_path:SPA config file (/etc/zfs/zpool.cache)
parm:           zfs_autoimport_disable:Disable pool import at module load
parm:           zfs_spa_discard_memory_limit:Limit for memory used in prefetching the checkpoint space map done on each vdev while discarding the checkpoint
parm:           metaslab_preload_pct:Percentage of CPUs to run a metaslab preload taskq
parm:           spa_load_verify_shift:log2 fraction of arc that can be used by inflight I/Os when verifying pool during import
parm:           spa_load_verify_metadata:Set to traverse metadata on pool import
parm:           spa_load_verify_data:Set to traverse data on pool import
parm:           spa_load_print_vdev_tree:Print vdev tree to zfs_dbgmsg during pool import
parm:           zio_taskq_batch_pct:Percentage of CPUs to run an IO worker thread
parm:           zio_taskq_batch_tpq:Number of threads per IO worker taskqueue
parm:           zfs_max_missing_tvds:Allow importing pool with up to this number of missing top-level vdevs (in read-only mode)
parm:           zfs_livelist_condense_zthr_pause:Set the livelist condense zthr to pause
parm:           zfs_livelist_condense_sync_pause:Set the livelist condense synctask to pause
parm:           zfs_livelist_condense_sync_cancel:Whether livelist condensing was canceled in the synctask
parm:           zfs_livelist_condense_zthr_cancel:Whether livelist condensing was canceled in the zthr function
parm:           zfs_livelist_condense_new_alloc:Whether extra ALLOC blkptrs were added to a livelist entry while it was being condensed
parm:           zio_taskq_read:Configure IO queues for read IO
parm:           zio_taskq_write:Configure IO queues for write IO
parm:           zio_taskq_write_tpq:Number of CPUs per write issue taskq
parm:           zfs_multilist_num_sublists:Number of sublists used in each multilist
parm:           zfs_multihost_interval:Milliseconds between mmp writes to each leaf
parm:           zfs_multihost_fail_intervals:Max allowed period without a successful mmp write
parm:           zfs_multihost_import_intervals:Number of zfs_multihost_interval periods to wait for activity
parm:           metaslab_aliquot:Allocation granularity (a.k.a. stripe size)
parm:           metaslab_debug_load:Load all metaslabs when pool is first opened
parm:           metaslab_debug_unload:Prevent metaslabs from being unloaded
parm:           metaslab_preload_enabled:Preload potential metaslabs during reassessment
parm:           metaslab_preload_limit:Max number of metaslabs per group to preload
parm:           metaslab_unload_delay:Delay in txgs after metaslab was last used before unloading
parm:           metaslab_unload_delay_ms:Delay in milliseconds after metaslab was last used before unloading
parm:           zfs_mg_noalloc_threshold:Percentage of metaslab group size that should be free to make it eligible for allocation
parm:           zfs_mg_fragmentation_threshold:Percentage of metaslab group size that should be considered eligible for allocations unless all metaslab groups within the metaslab class have also crossed this threshold
parm:           metaslab_fragmentation_factor_enabled:Use the fragmentation metric to prefer less fragmented metaslabs
parm:           zfs_metaslab_fragmentation_threshold:Fragmentation for metaslab to allow allocation
parm:           metaslab_lba_weighting_enabled:Prefer metaslabs with lower LBAs
parm:           metaslab_bias_enabled:Enable metaslab group biasing
parm:           zfs_metaslab_segment_weight_enabled:Enable segment-based metaslab selection
parm:           zfs_metaslab_switch_threshold:Segment-based metaslab selection maximum buckets before switching
parm:           metaslab_force_ganging:Blocks larger than this size are sometimes forced to be gang blocks
parm:           metaslab_force_ganging_pct:Percentage of large blocks that will be forced to be gang blocks
parm:           metaslab_df_max_search:Max distance (bytes) to search forward before using size tree
parm:           metaslab_df_use_largest_segment:When looking in size tree, use largest segment instead of exact fit
parm:           zfs_metaslab_max_size_cache_sec:How long to trust the cached max chunk size of a metaslab
parm:           zfs_metaslab_mem_limit:Percentage of memory that can be used to store metaslab range trees
parm:           zfs_metaslab_try_hard_before_gang:Try hard to allocate before ganging
parm:           zfs_metaslab_find_max_tries:Normally only consider this many of the best metaslabs in each vdev
parm:           zfs_active_allocator:SPA active allocator
parm:           zfs_zevent_len_max:Max event queue length
parm:           zfs_scan_vdev_limit:Max bytes in flight per leaf vdev for scrubs and resilvers
parm:           zfs_scrub_min_time_ms:Min millisecs to scrub per txg
parm:           zfs_obsolete_min_time_ms:Min millisecs to obsolete per txg
parm:           zfs_free_min_time_ms:Min millisecs to free per txg
parm:           zfs_resilver_min_time_ms:Min millisecs to resilver per txg
parm:           zfs_scan_suspend_progress:Set to prevent scans from progressing
parm:           zfs_no_scrub_io:Set to disable scrub I/O
parm:           zfs_no_scrub_prefetch:Set to disable scrub prefetching
parm:           zfs_async_block_max_blocks:Max number of blocks freed in one txg
parm:           zfs_max_async_dedup_frees:Max number of dedup blocks freed in one txg
parm:           zfs_free_bpobj_enabled:Enable processing of the free_bpobj
parm:           zfs_scan_blkstats:Enable block statistics calculation during scrub
parm:           zfs_scan_mem_lim_fact:Fraction of RAM for scan hard limit
parm:           zfs_scan_issue_strategy:IO issuing strategy during scrubbing. 0 = default, 1 = LBA, 2 = size
parm:           zfs_scan_legacy:Scrub using legacy non-sequential method
parm:           zfs_scan_checkpoint_intval:Scan progress on-disk checkpointing interval
parm:           zfs_scan_max_ext_gap:Max gap in bytes between sequential scrub / resilver I/Os
parm:           zfs_scan_mem_lim_soft_fact:Fraction of hard limit used as soft limit
parm:           zfs_scan_strict_mem_lim:Tunable to attempt to reduce lock contention
parm:           zfs_scan_fill_weight:Tunable to adjust bias towards more filled segments during scans
parm:           zfs_scan_report_txgs:Tunable to report resilver performance over the last N txgs
parm:           zfs_resilver_disable_defer:Process all resilvers immediately
parm:           zfs_resilver_defer_percent:Issued IO percent complete after which resilvers are deferred
parm:           zfs_scrub_error_blocks_per_txg:Error blocks to be scrubbed in one txg
parm:           zfs_dirty_data_max_percent:Max percent of RAM allowed to be dirty
parm:           zfs_dirty_data_max_max_percent:zfs_dirty_data_max upper bound as % of RAM
parm:           zfs_delay_min_dirty_percent:Transaction delay threshold
parm:           zfs_dirty_data_max:Determines the dirty space limit
parm:           zfs_wrlog_data_max:The size limit of write-transaction zil log data
parm:           zfs_dirty_data_max_max:zfs_dirty_data_max upper bound in bytes
parm:           zfs_dirty_data_sync_percent:Dirty data txg sync threshold as a percentage of zfs_dirty_data_max
parm:           zfs_delay_scale:How quickly delay approaches infinity
parm:           zfs_zil_clean_taskq_nthr_pct:Max percent of CPUs that are used per dp_sync_taskq
parm:           zfs_zil_clean_taskq_minalloc:Number of taskq entries that are pre-populated
parm:           zfs_zil_clean_taskq_maxalloc:Max number of taskq entries that are cached
parm:           zvol_enforce_quotas:Enable strict ZVOL quota enforcment
parm:           zfs_livelist_max_entries:Size to start the next sub-livelist in a livelist
parm:           zfs_livelist_min_percent_shared:Threshold at which livelist is disabled
parm:           zfs_max_recordsize:Max allowed record size
parm:           zfs_allow_redacted_dataset_mount:Allow mounting of redacted datasets
parm:           zfs_snapshot_history_enabled:Include snapshot events in pool history/events
parm:           zfs_disable_ivset_guid_check:Set to allow raw receives without IVset guids
parm:           zfs_default_bs:Default dnode block shift
parm:           zfs_default_ibs:Default dnode indirect block shift
parm:           zfs_prefetch_disable:Disable all ZFS prefetching
parm:           zfetch_max_streams:Max number of streams per zfetch
parm:           zfetch_min_sec_reap:Min time before stream reclaim
parm:           zfetch_max_sec_reap:Max time before stream delete
parm:           zfetch_min_distance:Min bytes to prefetch per stream
parm:           zfetch_max_distance:Max bytes to prefetch per stream
parm:           zfetch_max_idistance:Max bytes to prefetch indirects for per stream
parm:           zfetch_max_reorder:Max request reorder distance within a stream
parm:           zfetch_hole_shift:Max log2 fraction of holes in a stream
parm:           zfs_pd_bytes_max:Max number of bytes to prefetch
parm:           zfs_traverse_indirect_prefetch_limit:Traverse prefetch number of blocks pointed by indirect block
parm:           ignore_hole_birth:Alias for send_holes_without_birth_time (int)
parm:           send_holes_without_birth_time:Ignore hole_birth txg for zfs send
parm:           zfs_send_corrupt_data:Allow sending corrupt data
parm:           zfs_send_queue_length:Maximum send queue length
parm:           zfs_send_unmodified_spill_blocks:Send unmodified spill blocks
parm:           zfs_send_no_prefetch_queue_length:Maximum send queue length for non-prefetch queues
parm:           zfs_send_queue_ff:Send queue fill fraction
parm:           zfs_send_no_prefetch_queue_ff:Send queue fill fraction for non-prefetch queues
parm:           zfs_override_estimate_recordsize:Override block size estimate with fixed size
parm:           zfs_recv_queue_length:Maximum receive queue length
parm:           zfs_recv_queue_ff:Receive queue fill fraction
parm:           zfs_recv_write_batch_size:Maximum amount of writes to batch into one transaction
parm:           zfs_recv_best_effort_corrective:Ignore errors during corrective receive
parm:           dmu_object_alloc_chunk_shift:CPU-specific allocator grabs 2^N objects at once
parm:           zfs_nopwrite_enabled:Enable NOP writes
parm:           zfs_per_txg_dirty_frees_percent:Percentage of dirtied blocks from frees in one TXG
parm:           zfs_dmu_offset_next_sync:Enable forcing txg sync to find holes
parm:           dmu_prefetch_max:Limit one prefetch call to this size
parm:           dmu_ddt_copies:Override copies= for dedup objects
parm:           ddt_zap_default_bs:DDT ZAP leaf blockshift
parm:           ddt_zap_default_ibs:DDT ZAP indirect blockshift
parm:           zfs_dedup_log_txg_max:Max transactions before starting to flush dedup logs
parm:           zfs_dedup_log_mem_max:Max memory for dedup logs
parm:           zfs_dedup_log_mem_max_percent:Max memory for dedup logs, as % of total memory
parm:           zfs_dedup_prefetch:Enable prefetching dedup-ed blks
parm:           zfs_dedup_log_flush_passes_max:Max number of incremental dedup log flush passes per transaction
parm:           zfs_dedup_log_flush_min_time_ms:Min time to spend on incremental dedup log flush each transaction
parm:           zfs_dedup_log_flush_entries_min:Min number of log entries to flush each transaction
parm:           zfs_dedup_log_flush_flow_rate_txgs:Number of txgs to average flow rates across
parm:           zfs_dbuf_state_index:Calculate arc header index
parm:           dbuf_cache_max_bytes:Maximum size in bytes of the dbuf cache.
parm:           dbuf_cache_hiwater_pct:Percentage over dbuf_cache_max_bytes for direct dbuf eviction.
parm:           dbuf_cache_lowater_pct:Percentage below dbuf_cache_max_bytes when dbuf eviction stops.
parm:           dbuf_metadata_cache_max_bytes:Maximum size in bytes of dbuf metadata cache.
parm:           dbuf_cache_shift:Set size of dbuf cache to log2 fraction of arc size.
parm:           dbuf_metadata_cache_shift:Set size of dbuf metadata cache to log2 fraction of arc size.
parm:           dbuf_mutex_cache_shift:Set size of dbuf cache mutex array as log2 shift.
parm:           zfs_btree_verify_intensity:Enable btree verification. Levels above 4 require ZFS be built with debugging
parm:           brt_zap_prefetch:Enable prefetching of BRT ZAP entries
parm:           brt_zap_default_bs:BRT ZAP leaf blockshift
parm:           brt_zap_default_ibs:BRT ZAP indirect blockshift
parm:           zfs_arc_min:Minimum ARC size in bytes
parm:           zfs_arc_max:Maximum ARC size in bytes
parm:           zfs_arc_meta_balance:Balance between metadata and data on ghost hits.
parm:           zfs_arc_grow_retry:Seconds before growing ARC size
parm:           zfs_arc_shrink_shift:log2(fraction of ARC to reclaim)
parm:           zfs_arc_pc_percent:Percent of pagecache to reclaim ARC to
parm:           zfs_arc_average_blocksize:Target average block size
parm:           zfs_compressed_arc_enabled:Disable compressed ARC buffers
parm:           zfs_arc_min_prefetch_ms:Min life of prefetch block in ms
parm:           zfs_arc_min_prescient_prefetch_ms:Min life of prescient prefetched block in ms
parm:           l2arc_write_max:Max write bytes per interval
parm:           l2arc_write_boost:Extra write bytes during device warmup
parm:           l2arc_headroom:Number of max device writes to precache
parm:           l2arc_headroom_boost:Compressed l2arc_headroom multiplier
parm:           l2arc_trim_ahead:TRIM ahead L2ARC write size multiplier
parm:           l2arc_feed_secs:Seconds between L2ARC writing
parm:           l2arc_feed_min_ms:Min feed interval in milliseconds
parm:           l2arc_noprefetch:Skip caching prefetched buffers
parm:           l2arc_feed_again:Turbo L2ARC warmup
parm:           l2arc_norw:No reads during writes
parm:           l2arc_meta_percent:Percent of ARC size allowed for L2ARC-only headers
parm:           l2arc_rebuild_enabled:Rebuild the L2ARC when importing a pool
parm:           l2arc_rebuild_blocks_min_l2size:Min size in bytes to write rebuild log blocks in L2ARC
parm:           l2arc_mfuonly:Cache only MFU data from ARC into L2ARC
parm:           l2arc_exclude_special:Exclude dbufs on special vdevs from being cached to L2ARC if set.
parm:           zfs_arc_lotsfree_percent:System free memory I/O throttle in bytes
parm:           zfs_arc_sys_free:System free memory target size in bytes
parm:           zfs_arc_dnode_limit:Minimum bytes of dnodes in ARC
parm:           zfs_arc_dnode_limit_percent:Percent of ARC meta buffers for dnodes
parm:           zfs_arc_dnode_reduce_percent:Percentage of excess dnodes to try to unpin
parm:           zfs_arc_eviction_pct:When full, ARC allocation waits for eviction of this % of alloc size
parm:           zfs_arc_evict_batch_limit:The number of headers to evict per sublist before moving to the next
parm:           zfs_arc_prune_task_threads:Number of arc_prune threads
parm:           zstd_earlyabort_pass:Enable early abort attempts when using zstd
parm:           zstd_abort_size:Minimal size of block to attempt early abort
parm:           zfs_max_dataset_nesting:Limit to the amount of nesting a path can have. Defaults to 50.
parm:           zfs_fletcher_4_impl:Select fletcher 4 implementation.
parm:           zfs_sha512_impl:Select SHA512 implementation.
parm:           zfs_sha256_impl:Select SHA256 implementation.
parm:           icp_gcm_impl:Select gcm implementation.
parm:           zfs_blake3_impl:Select BLAKE3 implementation.
parm:           icp_aes_impl:Select aes implementation.

 

 

Posted

If I understood correctly, the GitHub job runs on the amd64 platform, and your test is on a rockchip kernel. I am talking about the sunxi kernel. The latest sunxi kernel where zfs-dkms worked (on the Allwinner H618) was 6.11.2-edge.

 

My RK356x board is on the way, but it would be nice to make this small LonganPi-3H with its H618 usable.

Posted
28 minutes ago, Johnny on the couch said:

the GitHub job runs on the amd64 platform,

 

30-40% of the GH runners we use are on the arm64 platform. Compilation works in all combinations.

 

28 minutes ago, Johnny on the couch said:

your test is on a rockchip kernel.


Yes. This one was lying around. Allwinner should work the same, but -edge kernels are experimental; adjust expectations.

 

28 minutes ago, Johnny on the couch said:

but it would be nice to make this small LonganPi-3H with its H618 usable

 

You have all the tools you need, but nobody will debug some old experimental kernel just for you. Hint: when you generate an image from sources, enable the kernel headers install (I think the parameter is INSTALL_HEADERS=yes), so you have headers that match your kernel. The ones in the repository are probably different / not compatible.
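Something along these lines; the BOARD name here is a guess based on your hostname, so check what the build framework actually offers:

./compile.sh BOARD=longanpi-3h BRANCH=current RELEASE=noble \
    KERNEL_CONFIGURE=no INSTALL_HEADERS=yes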

 

I checked that it works with the latest stable kernel (the only target worth spending time on) on supported hardware (what happened to be around my desk). Unsupported hardware running an unsupported kernel - it is expected that things will fail.

Posted

Hi,

I was not expecting someone to debug some old experimental kernel just for me. I am trying to report a bug and give back something to the community.

It is not a build problem, so GitHub says to use the forums for that.

I've spent more than a week trying to get the latest and greatest experimental kernels (6.14.5, 6.14.8) and not-so-experimental ones (6.12.9, 6.12.23, 6.12.30) working with DKMS. I did many rebuilds, reinstalls and reflashes.

I was using INSTALL_HEADERS=yes in the CLI args or config. The DKMS drivers (zfs and aic8800) do not work.

 

I suspect the problem is in the sunxi headers, as reported a month ago in a GitHub issue. There, the same DKMS driver works with the same kernel version on RaspiOS; when that person tried it on Armbian, it did not work.

 

If I were skilled enough I would try to fix it myself, but I am not. Therefore I am reporting this bug here.

Posted
5 hours ago, Johnny on the couch said:

give back something to the community

 

And I have ruled out that this is an Armbian OS problem.

Posted
16 hours ago, Johnny on the couch said:

If I understood correctly, the GitHub job runs on the amd64 platform, and your test is on a rockchip kernel. I am talking about the sunxi kernel. The latest sunxi kernel where zfs-dkms worked (on the Allwinner H618) was 6.11.2-edge.

 

My RK356x board is on the way, but it would be nice to make this small LonganPi-3H with its H618 usable.

That RK356x would not be much faster than the H618, I think, and the "sunxi kernel" is 32-bit; I assume you use and mean 64-bit.

DKMS always needs attention (in my experience), so I am happy I have set up my computers/boards such that I do not need it. Actually, this topic made me run apt purge dkms -y, as last week's dist-upgrade from Bookworm to Trixie/Testing also had some issue (only a warning, not fatal) regarding DKMS.

 

In the past I used it for some SDR hardware module, but for WiFi USB dongles I simply have a red line: if there is no mainline-kernel-integrated code, I do not use it. Also, as Igor points out, that stick you have just has very low wireless performance. Use RJ45 or buy a good WiFi module, I would say.

zfs-dkms has no hardware/vendor origin, so that is less of a problem, but note that for cheap low-end SBC hardware, Btrfs also works fine and is in the Linux kernel, so no special care is needed. It also works fine on old ARMv6 Raspberry Pi 0/1, including Zstd on-the-fly compression etc.

 

You could create a 2-partition image (the U-Boot blob will sit between the partition table and the 1st partition). Use Btrfs as rootfs and also install a vanilla Debian kernel and GRUB EFI. Tag the 1st partition as type 0xEF00 (ESP). That image can then run as a virtual machine (hardware accelerated) on all ARM64 SoCs and also emulated on fast x86-64 PCs. Run the Armbian build inside that VM. I did that for 32-bit Allwinner, since I have NanoPi-NEOs and no 64-bit Allwinner SoCs. That makes experimenting and developing easier, and you also get the kernel log via a virtual tty. Still make sure you have a real serial cable/console for that H618.
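For the emulated x86-64 case, that boils down to something like this; the firmware path is from Debian's qemu-efi-aarch64 package, and the image name is whatever you created:

qemu-system-aarch64 -M virt -cpu cortex-a53 -smp 4 -m 2048 \
    -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd \
    -drive file=armbian-vm.img,format=raw,if=virtio \
    -nic user,model=virtio-net-pci \
    -nographic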

Posted
2 hours ago, eselarm said:

the H618, I think, and the "sunxi kernel" is 32-bit


It's 64-bit, but the support quality (tl;dr) of Allwinner is behind Rockchip's.

Posted

DKMS is 'quite complicated'. In an attempt to understand all that 'cryptic' stuff, I went googling around:

https://wiki.archlinux.org/title/Dynamic_Kernel_Module_Support

https://www.linuxjournal.com/article/6896

https://github.com/dell/dkms

https://wiki.gentoo.org/wiki/DKMS

https://www.collabora.com/news-and-blog/blog/2021/05/05/quick-hack-patching-kernel-module-using-dkms/

https://www.baeldung.com/linux/dynamic-kernel-module-support

https://nilrt-docs.ni.com/opkg/dkms_opkg.html

^ surprisingly, I found this guide/tutorial from National Instruments 'quite intuitive'

and I dug further into how to make a kernel module - well, at least a 'hello world':

https://tldp.org/LDP/lkmpg/2.6/html/

https://tldp.org/LDP/lkmpg/2.6/lkmpg.pdf

The Linux Kernel Module Programming Guide
Peter Jay Salzman
Michael Burian
Ori Pomerantz
Copyright © 2001 Peter Jay Salzman

---

OK, I actually tried building that 'hello world' kernel module and *it works*, following practically 'ancient' 2001 instructions.

So it turns out that to compile a kernel module, you do not need to build it inside the kernel source tree itself,

and that is *without* DKMS; read those last two TLDP guides: The Linux Kernel Module Programming Guide.

You can try building and inserting the 'hello world' module into your kernel - no DKMS or anything - once you have built the module!

In short, it is not necessary to build a kernel module within the kernel source tree itself, but there are some procedures, as spelled out in those two TLDP docs.
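For reference, the whole thing is tiny. This is essentially the module and Makefile from that guide, updated for current kernels (the names are the usual example ones):

/* hello.c - minimal out-of-tree module */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("hello world module from the TLDP guide");

static int __init hello_init(void)
{
        pr_info("hello: loaded\n");
        return 0;
}

static void __exit hello_exit(void)
{
        pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

# Makefile - kbuild builds against the running kernel's headers
# (the recipe lines below must start with a tab)
obj-m := hello.o
all:
        $(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
        $(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Then: make && sudo insmod hello.ko && dmesg | tail (and sudo rmmod hello to unload).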

(But fast forward to today: these same instructions may not work if you are using Secure Boot; then much more complicated things like module signing get involved - review that DKMS link from Dell.)

-------

Now back to DKMS - where does that fit in?

So it turns out that DKMS is a utility / automation tool that helps you *rebuild a kernel module* outside the Linux kernel source tree (i.e. like the hello world module above), *without building the kernel from source itself*!

But you still need to ***rebuild the kernel module from source*** (e.g. using DKMS); for that, all the other links above are guides that may be relevant.
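Concretely, you drop the module source into /usr/src/<name>-<version>/ with a dkms.conf next to it. A hedged sketch for the hello module above, using directives from the dkms man page:

PACKAGE_NAME="hello"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="hello"
DEST_MODULE_LOCATION[0]="/updates/dkms"
MAKE[0]="make -C ${kernel_source_dir} M=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build modules"
CLEAN="make -C ${kernel_source_dir} M=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build clean"
AUTOINSTALL="yes"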

----

Now add more complications / complexities: normally what you want is a *driver*, not simply a kernel module.

The driver often has several parts. The kernel module itself is the 'easy' part: you need to build it from source. That does not mean having to build the kernel itself from source, but you do need to build the *kernel module*.

After you build the kernel module successfully, there are more blows and pitfalls.

These days WiFi and much other network hardware requires *firmware files*. These *firmware files* can consist of a 'bin' (firmware binary) and configuration (some of them text files); this firmware usually lives in /lib/firmware.

Then you need your kernel module; you can think of it as the 'driver core' that interfaces the OS with that firmware. The firmware does not necessarily run on the (host) CPU (i.e. your PC) but instead on the WiFi chip itself.

This is the part that is *highly opaque*: there are so many WiFi chips that are *undocumented*, the firmware is *undocumented*, and if you do not have any source for the kernel module which interfaces the firmware to the OS, you are out of luck.

-----

 

To summarise: normally, one cannot hope to take a binary kernel module, install it in your current kernel, and hope it 'just works'.

If that works, it is because a lot of things, such as module versions and various constraints imposed by the kernel, match those in the kernel module itself, i.e. that module was compiled specifically for that specific kernel!

DKMS does not solve this; DKMS only *helps you rebuild the (kernel) module from source* (and optionally install it).

The idea is this: you have the *source* to your out-of-tree *kernel modules*; when you upgrade the kernel, e.g. via an apt upgrade, DKMS can be triggered to *rebuild the kernel module from source* (and install it) into the new kernel's module tree, e.g. copy it into /lib/modules/{kernel version}/updates/dkms.
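In commands, that flow looks roughly like this (module name and version follow the hello sketch above; the kernel version is one from this thread):

dkms add -m hello -v 1.0        # registers /usr/src/hello-1.0
dkms build -m hello -v 1.0 -k 6.12.30-current-sunxi64
dkms install -m hello -v 1.0 -k 6.12.30-current-sunxi64
# after a kernel upgrade, the packaging hooks typically run:
dkms autoinstall -k <new kernel version>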

---

If the kernel module is part of the kernel source tree itself, it does not need DKMS. But if errors occur while building that *kernel module* (i.e. driver), then congrats - you have found a 'bug' in the *kernel module (driver)*, and that is true even if it is built out of tree via DKMS; i.e. the driver sources need to be patched to work with the new kernel.
