
ZFS, header and kernel mismatch?


zeyoner
Solved by Igor


I am attempting to install ZFS, but it seems that the latest Linux headers available are linux-headers-5.10.0-31-arm64.

 

I am not able to install ZFS. I am pretty sure I need to downgrade the kernel to match the headers.

 

sudo modprobe zfs
modprobe: FATAL: Module zfs not found in directory /lib/modules/6.6.39-current-rockchip64

 

I installed linux-image-5.10.0-31-arm64 and ran sudo apt-mark hold linux-image-5.10.0-31-arm64, but I do not know how to select which kernel boots. Searching tells me to use armbian-config; I had to install it, and after running it the "Other" option is missing. There aren't any options to change the kernel.
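For context, the modprobe error above says no zfs module exists for the running 6.6.39-current-rockchip64 kernel; the 5.10 headers shown are Debian's generic arm64 packages, not headers for the Armbian kernel that is actually booted. A minimal sketch of deriving the matching headers package from the running kernel (assuming Armbian's linux-headers-<branch>-<family> naming; the helper name is illustrative):

```shell
# Hypothetical sketch: DKMS modules such as zfs-dkms compile against the
# headers of the RUNNING kernel, so the headers package must match `uname -r`.
headers_pkg() {
    # "6.6.39-current-rockchip64" -> "linux-headers-current-rockchip64"
    # (assumes Armbian's linux-headers-<branch>-<family> naming)
    echo "linux-headers-${1#*-}"
}

headers_pkg "$(uname -r)"
# then, roughly: sudo apt install "$(headers_pkg "$(uname -r)")" zfs-dkms zfsutils-linux
```

With matching headers in place, the DKMS build should produce a zfs.ko under /lib/modules/$(uname -r), which is what modprobe was unable to find.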

 

Screenshot 2024-08-04 231704.png

Edited by zeyoner




### lsusb:

Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 007 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

/:  Bus 08.Port 1: Dev 1, Class=root_hub, Driver=ohci-platform/1p, 12M
/:  Bus 07.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
/:  Bus 06.Port 1: Dev 1, Class=root_hub, Driver=ohci-platform/1p, 12M
/:  Bus 05.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
/:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 5000M
/:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 480M
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 5000M
/:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 480M

### Group membership of *REDACTED* : *REDACTED* tty disk dialout sudo audio video plugdev games users systemd-journal input netdev ssh docker

### Userland:

PRETTY_NAME="Armbian 23.8.1 bullseye"
ARMBIAN_PRETTY_NAME="Armbian 23.8.1 bullseye"

### Installed packages:

rc  armbian-bsp-cli-rockpi-4c         23.8.1                         arm64        Armbian CLI BSP for board 'rockpi-4c' - transitional package
ii  armbian-bsp-cli-rockpi-4c-current 23.8.1                         arm64        Armbian CLI BSP for board 'rockpi-4c' branch 'current'
ii  armbian-config                    24.5.5                         all          Armbian configuration utility
ii  armbian-firmware                  24.5.6                         all          Armbian - Linux firmware
ii  armbian-plymouth-theme            24.5.5                         all          boot animation, logger and I/O multiplexer - Armbian theme
ii  hostapd                           3:2.10-6~armbian22.02.3+1      arm64        IEEE 802.11 AP and IEEE 802.1X/WPA/WPA2/EAP Authenticator
ii  htop                              3.1.0-0~armbian20.08.2+1       arm64        interactive processes viewer
ii  linux-base                        4.6                            all          Linux image base package
ii  linux-dtb-current-rockchip64      24.5.3                         arm64        Armbian Linux current DTBs in /boot/dtb-6.6.39-current-rockchip64
ii  linux-headers-5.10.0-31-arm64     5.10.221-1                     arm64        Header files for Linux 5.10.0-31-arm64
ii  linux-headers-5.10.0-31-common    5.10.221-1                     all          Common header files for Linux 5.10.0-31
ii  linux-headers-arm64               5.10.221-1                     arm64        Header files for Linux arm64 configuration (meta-package)
hi  linux-image-5.10.0-31-arm64       5.10.221-1                     arm64        Linux 5.10 for 64-bit ARMv8 machines (signed)
ii  linux-image-current-rockchip64    24.5.3                         arm64        Armbian Linux current kernel image 6.6.39-current-rockchip64
ii  linux-kbuild-5.10                 5.10.221-1                     arm64        Kbuild infrastructure for Linux 5.10
ii  linux-libc-dev:arm64              23.02.2                        arm64        Armbian Linux support headers for userspace development
ii  linux-u-boot-rockpi-4c-current    24.5.1                         arm64        Das U-Boot for rockpi-4c

### Loaded modules:

Module                  Size  Used by
dm_crypt               49152  1
trusted                12288  1 dm_crypt
dm_mod                126976  3 dm_crypt
xt_conntrack           12288  1
nft_chain_nat          12288  3
xt_MASQUERADE          16384  1
nf_nat                 45056  2 nft_chain_nat,xt_MASQUERADE
nf_conntrack_netlink    45056  0
nf_conntrack          118784  4 xt_conntrack,nf_nat,nf_conntrack_netlink,xt_MASQUERADE
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         12288  1 nf_conntrack
xfrm_user              40960  1
xfrm_algo              12288  1 xfrm_user
xt_addrtype            12288  2
nft_compat             16384  4
nf_tables             225280  75 nft_compat,nft_chain_nat
nfnetlink              16384  4 nft_compat,nf_conntrack_netlink,nf_tables
br_netfilter           28672  0
bridge                241664  1 br_netfilter
lz4hc                  12288  0
lz4                    12288  0
zram                   32768  3
raid1                  45056  1
snd_soc_hdmi_codec     20480  1
snd_soc_simple_card    20480  0
brcmfmac_wcc           12288  0
md_mod                151552  2 raid1
snd_soc_audio_graph_card    16384  0
snd_soc_simple_card_utils    24576  2 snd_soc_audio_graph_card,snd_soc_simple_card
snd_soc_spdif_tx       12288  0
dw_hdmi_i2s_audio      12288  0
hci_uart              135168  0
panfrost               69632  0
dw_hdmi_cec            12288  0
gpu_sched              36864  1 panfrost
drm_shmem_helper       16384  1 panfrost
rk_crypto              28672  0
btqca                  20480  1 hci_uart
rng_core               16384  1 rk_crypto
snd_soc_rockchip_i2s    24576  4
btsdio                 16384  0
btrtl                  28672  1 hci_uart
btintel                40960  1 hci_uart
hantro_vpu            249856  0
rockchip_vdec          77824  0
snd_soc_es8316         36864  1
brcmfmac              368640  1 brcmfmac_wcc
btbcm                  20480  1 hci_uart
snd_soc_core          208896  7 snd_soc_spdif_tx,snd_soc_hdmi_codec,snd_soc_audio_graph_card,snd_soc_simple_card_utils,snd_soc_rockchip_i2s,snd_soc_simple_card,snd_soc_es8316
v4l2_vp9               20480  2 rockchip_vdec,hantro_vpu
rockchip_rga           20480  0
brcmutil               16384  1 brcmfmac
bluetooth             663552  7 btrtl,btqca,btsdio,btintel,hci_uart,btbcm
videobuf2_dma_contig    20480  2 rockchip_vdec,hantro_vpu
snd_compress           24576  1 snd_soc_core
v4l2_h264              16384  2 rockchip_vdec,hantro_vpu
snd_pcm_dmaengine      12288  1 snd_soc_core
v4l2_mem2mem           24576  3 rockchip_vdec,hantro_vpu,rockchip_rga
videobuf2_dma_sg       16384  1 rockchip_rga
cfg80211              802816  1 brcmfmac
realtek                32768  1
snd_pcm               106496  6 snd_soc_hdmi_codec,snd_compress,snd_soc_simple_card_utils,snd_soc_core,snd_soc_es8316,snd_pcm_dmaengine
videobuf2_memops       16384  2 videobuf2_dma_contig,videobuf2_dma_sg
videobuf2_v4l2         20480  4 rockchip_vdec,hantro_vpu,rockchip_rga,v4l2_mem2mem
sg                     28672  0
snd_timer              36864  1 snd_pcm
videodev              229376  5 rockchip_vdec,videobuf2_v4l2,hantro_vpu,rockchip_rga,v4l2_mem2mem
rfkill                 24576  5 bluetooth,cfg80211
snd                    77824  5 snd_soc_hdmi_codec,snd_timer,snd_compress,snd_soc_core,snd_pcm
videobuf2_common       49152  8 rockchip_vdec,videobuf2_dma_contig,videobuf2_v4l2,hantro_vpu,rockchip_rga,videobuf2_dma_sg,v4l2_mem2mem,videobuf2_memops
mc                     53248  6 rockchip_vdec,videodev,videobuf2_v4l2,hantro_vpu,videobuf2_common,v4l2_mem2mem
soundcore              12288  1 snd
dwmac_rk               28672  0
stmmac_platform        20480  1 dwmac_rk
stmmac                233472  3 stmmac_platform,dwmac_rk
pcs_xpcs               20480  1 stmmac
cpufreq_dt             16384  0
ip_tables              28672  0
x_tables               36864  5 xt_conntrack,nft_compat,xt_addrtype,ip_tables,xt_MASQUERADE
autofs4                40960  2

### nand-sata-install.log:



### Current system health:

Time      CPU n/a    load %cpu %sys %usr %nice %io %irq   Tcpu
23:45:28    ---      1.36   3%   1%   1%   0%   0%   0%  38.1 °C
23:45:29    ---      1.36  13%   5%   0%   0%   6%   0%  38.8 °C
23:45:29    ---      1.36   5%   4%   0%   0%   0%   0%  38.8 °C
23:45:29    ---      1.36   7%   5%   0%   0%   0%   0%  38.8 °C
23:45:30    ---      1.36   6%   5%   0%   0%   0%   0%  38.8 °C
23:45:30    ---      1.36   3%   1%   1%   0%   0%   0%  38.1 °C
23:45:31    ---      1.36  15%  14%   0%   0%   0%   0%  38.8 °C
23:45:31    ---      1.36  18%  12%   3%   0%   0%   1%  38.8 °C
23:45:31    ---      1.36  10%   2%   0%   0%   6%   0%  38.1 °C
23:45:32    ---      1.36  12%   6%   0%   0%   5%   0%  38.8 °C

### resolv.conf

-rw-r--r-- 1 root root 82 Aug  4 22:13 /etc/resolv.conf
# Generated by NetworkManager
search mynetworksettings.com
nameserver XXX.XXX.1.1

### Current sysinfo:

Linux 6.6.39-current-rockchip64 (*REDACTED*) 	08/04/2024 	_aarch64_	(6 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.08    0.00    1.74    0.47    0.00   96.70

Device             tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd
mmcblk0           5.36        98.19       124.17         0.00     556727     704048          0
mmcblk0p1         5.33        97.59       124.17         0.00     553295     704048          0
mmcblk0boot0      0.03         0.13         0.00         0.00        760          0          0
mmcblk0boot1      0.03         0.13         0.00         0.00        760          0          0
sda              51.97     25080.47       364.52         0.00  142201730    2066762          0
sda1             51.94     25079.83       364.52         0.00  142198110    2066762          0
sdb              51.81         1.29     25441.61         0.00       7333  144249354          0
sdb1             51.79         0.87     25441.61         0.00       4929  144249354          0
md127             1.38         3.09       363.63         0.00      17520    2061704          0
zram0             0.03         0.63         0.00         0.00       3552          4          0
zram1             1.84         0.49         9.49         0.00       2768      53804          0
zram2             0.00         0.00         0.00         0.00          0          0          0
dm-0              1.31         1.31       362.87         0.00       7401    2057404          0

--procs-- -----------------------memory---------------------- ---swap-- -----io---- -system-- --------cpu--------
   r    b         swpd         free         buff        cache   si   so    bi    bo   in   cs  us  sy  id  wa  st
   1    0            0      2852696        87652       758476    0    0    17    84   72   79   1   2  97   0   0

               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       249Mi       2.7Gi       6.0Mi       826Mi       3.4Gi
Swap:          1.9Gi          0B       1.9Gi

NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lzo-rle       1.9G    4K    74B   12K       6 [SWAP]
/dev/zram1 zstd           50M  1.6M 169.1K  748K       6 /var/log

 23:45:32 up  1:34,  2 users,  load average: 1.41, 1.34, 1.13

[   11.029453] systemd[1]: Mounting Kernel Trace File System...
[   11.036976] systemd[1]: Starting Restore / save the current clock...
[   11.043059] systemd[1]: Starting Set the console keyboard layout...
[   11.049490] systemd[1]: Starting Create list of static device nodes for the current kernel...
[   11.055273] systemd[1]: Starting Load Kernel Module configfs...
[   11.061097] systemd[1]: Starting Load Kernel Module drm...
[   11.067007] systemd[1]: Starting Load Kernel Module fuse...
[   11.074905] systemd[1]: Started Nameserver information manager.
[   11.076390] systemd[1]: Reached target Network (Pre).
[   11.078239] systemd[1]: Condition check resulted in Set Up Additional Binary Formats being skipped.
[   11.078532] systemd[1]: Condition check resulted in File System Check on Root Device being skipped.
[   11.085766] systemd[1]: Starting Load Kernel Modules...
[   11.092078] systemd[1]: Starting Remount Root and Kernel File Systems...
[   11.098340] systemd[1]: Starting Coldplug All udev Devices...
[   11.109477] systemd[1]: Mounted Huge Pages File System.
[   11.110442] systemd[1]: Mounted POSIX Message Queue File System.
[   11.111668] systemd[1]: Mounted Kernel Debug File System.
[   11.112610] systemd[1]: Mounted Kernel Trace File System.
[   11.115214] systemd[1]: Finished Restore / save the current clock.
[   11.118234] systemd[1]: Finished Create list of static device nodes for the current kernel.
[   11.120119] systemd[1]: modprobe@configfs.service: Succeeded.
[   11.121741] systemd[1]: Finished Load Kernel Module configfs.
[   11.123734] systemd[1]: modprobe@drm.service: Succeeded.
[   11.125235] systemd[1]: Finished Load Kernel Module drm.
[   11.127108] systemd[1]: modprobe@fuse.service: Succeeded.
[   11.128707] systemd[1]: Finished Load Kernel Module fuse.
[   11.133332] systemd[1]: Finished Load Kernel Modules.
[   11.176850] systemd[1]: Mounting FUSE Control File System...
[   11.185205] systemd[1]: Mounting Kernel Configuration File System...
[   11.192924] EXT4-fs (mmcblk0p1): re-mounted 58156628-a1ec-48f7-971d-8c646a89e424 r/w. Quota mode: none.
[   11.193739] systemd[1]: Starting Apply Kernel Variables...
[   11.206913] systemd[1]: Finished Remount Root and Kernel File Systems.
[   11.208080] systemd[1]: Mounted FUSE Control File System.
[   11.208832] systemd[1]: Mounted Kernel Configuration File System.
[   11.211070] systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.
[   11.211572] systemd[1]: Condition check resulted in Platform Persistent Storage Archival being skipped.
[   11.219188] systemd[1]: Starting Load/Save Random Seed...
[   11.228196] systemd[1]: Starting Create System Users...
[   11.252755] systemd[1]: Finished Apply Kernel Variables.
[   11.299605] systemd[1]: Finished Load/Save Random Seed.
[   11.302490] systemd[1]: Finished Create System Users.
[   11.303527] systemd[1]: Condition check resulted in First Boot Complete being skipped.
[   11.308998] systemd[1]: Starting Create Static Device Nodes in /dev...
[   11.377873] systemd[1]: Finished Set the console keyboard layout.
[   11.380851] systemd[1]: Finished Create Static Device Nodes in /dev.
[   11.381677] systemd[1]: Reached target Local File Systems (Pre).
[   11.412109] systemd[1]: Mounting /tmp...
[   11.421698] systemd[1]: Starting Rule-based Manager for Device Events and Files...
[   11.430419] systemd[1]: Mounted /tmp.
[   11.539254] systemd[1]: Started Rule-based Manager for Device Events and Files.
[   11.688082] systemd[1]: Finished Coldplug All udev Devices.
[   11.732929] systemd[1]: Starting Helper to synchronize boot up for ifupdown...
[   11.744740] systemd[1]: Starting Show Plymouth Boot Screen...
[   11.754511] systemd[1]: Starting Wait for udev To Complete Device Initialization...
[   11.767593] systemd[1]: Finished Helper to synchronize boot up for ifupdown.
[   11.818605] cpu cpu0: OPP table can't be empty
[   11.819767] systemd[1]: Started Show Plymouth Boot Screen.
[   11.820700] systemd[1]: Condition check resulted in Dispatch Password Requests to Console Directory Watch when bootsplash is active being skipped.
[   11.821404] systemd[1]: Started Forward Password Requests to Plymouth Directory Watch.
[   11.821630] systemd[1]: Reached target Paths.
[   11.913384] systemd[1]: Found device /dev/ttyS2.
[   11.930685] rk_gmac-dwmac fe300000.ethernet: IRQ eth_wake_irq not found
[   11.930708] rk_gmac-dwmac fe300000.ethernet: IRQ eth_lpi not found
[   11.930861] rk_gmac-dwmac fe300000.ethernet: PTP uses main clock
[   11.931113] rk_gmac-dwmac fe300000.ethernet: clock input or output? (input).
[   11.931128] rk_gmac-dwmac fe300000.ethernet: TX delay(0x28).
[   11.931141] rk_gmac-dwmac fe300000.ethernet: RX delay(0x11).
[   11.931159] rk_gmac-dwmac fe300000.ethernet: integrated PHY? (no).
[   11.931216] rk_gmac-dwmac fe300000.ethernet: clock input from PHY
[   11.947395] rk_gmac-dwmac fe300000.ethernet: init for RGMII
[   11.947901] rk_gmac-dwmac fe300000.ethernet: User ID: 0x10, Synopsys ID: 0x35
[   11.947926] rk_gmac-dwmac fe300000.ethernet: 	DWMAC1000
[   11.947938] rk_gmac-dwmac fe300000.ethernet: DMA HW capability register supported
[   11.947948] rk_gmac-dwmac fe300000.ethernet: RX Checksum Offload Engine supported
[   11.947957] rk_gmac-dwmac fe300000.ethernet: COE Type 2
[   11.947967] rk_gmac-dwmac fe300000.ethernet: TX Checksum insertion supported
[   11.947976] rk_gmac-dwmac fe300000.ethernet: Wake-Up On Lan supported
[   11.948112] rk_gmac-dwmac fe300000.ethernet: Normal descriptors
[   11.948124] rk_gmac-dwmac fe300000.ethernet: Ring mode enabled
[   11.948133] rk_gmac-dwmac fe300000.ethernet: Enable RX Mitigation via HW Watchdog Timer
[   11.956858] mc: Linux media interface: v0.10
[   12.006164] videodev: Linux video capture interface: v2.00
[   12.024337] sd 0:0:0:0: Attached scsi generic sg0 type 0
[   12.037495] sd 3:0:0:0: Attached scsi generic sg1 type 0
[   12.086013] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[   12.088325] Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
[   12.089990] Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
[   12.106142] cfg80211: loaded regulatory.db is malformed or signature is missing/invalid
[   12.113244] rockchip-rga ff680000.rga: HW Version: 0x03.02
[   12.116359] Bluetooth: Core ver 2.22
[   12.119692] rockchip-rga ff680000.rga: Registered rockchip-rga as /dev/video0
[   12.124169] NET: Registered PF_BLUETOOTH protocol family
[   12.124200] Bluetooth: HCI device and connection manager initialized
[   12.124230] Bluetooth: HCI socket layer initialized
[   12.124246] Bluetooth: L2CAP socket layer initialized
[   12.124312] Bluetooth: SCO socket layer initialized
[   12.152862] RTL8211E Gigabit Ethernet stmmac-0:00: attached PHY driver (mii_bus:phy_addr=stmmac-0:00, irq=POLL)
[   12.152889] RTL8211E Gigabit Ethernet stmmac-0:01: attached PHY driver (mii_bus:phy_addr=stmmac-0:01, irq=POLL)
[   12.155279] rockchip_vdec: module is from the staging directory, the quality is unknown, you have been warned.
[   12.163118] brcmfmac: F1 signature read @0x18000000=0x15294345
[   12.165874] hantro-vpu ff650000.video-codec: Adding to iommu group 0
[   12.167496] hantro-vpu ff650000.video-codec: registered rockchip,rk3399-vpu-enc as /dev/video1
[   12.167575] brcmfmac: brcmf_fw_alloc_request: using brcm/brcmfmac43456-sdio for chip BCM4345/9
[   12.169043] brcmfmac mmc2:0001:1: Direct firmware load for brcm/brcmfmac43456-sdio.radxa,rockpi4c.bin failed with error -2
[   12.179711] hantro-vpu ff650000.video-codec: registered rockchip,rk3399-vpu-dec as /dev/video2
[   12.187906] usbcore: registered new interface driver brcmfmac
[   12.189377] rkvdec ff660000.video-codec: Adding to iommu group 1
[   12.326236] rk3288-crypto ff8b0000.crypto: will run requests pump with realtime priority
[   12.326355] rk3288-crypto ff8b0000.crypto: Register ecb(aes) as ecb-aes-rk
[   12.326471] rk3288-crypto ff8b0000.crypto: Register cbc(aes) as cbc-aes-rk
[   12.326515] rk3288-crypto ff8b0000.crypto: Register ecb(des) as ecb-des-rk
[   12.326556] rk3288-crypto ff8b0000.crypto: Register cbc(des) as cbc-des-rk
[   12.326597] rk3288-crypto ff8b0000.crypto: Register ecb(des3_ede) as ecb-des3-ede-rk
[   12.326638] rk3288-crypto ff8b0000.crypto: Register cbc(des3_ede) as cbc-des3-ede-rk
[   12.326679] rk3288-crypto ff8b0000.crypto: Register sha1 as rk-sha1
[   12.326722] rk3288-crypto ff8b0000.crypto: Register sha256 as rk-sha256
[   12.326765] rk3288-crypto ff8b0000.crypto: Register md5 as rk-md5
[   12.326806] rk3288-crypto ff8b0000.crypto: Register TRNG with sample=1200
[   12.372159] panfrost ff9a0000.gpu: clock rate = 500000000
[   12.375101] rk3288-crypto ff8b8000.crypto: will run requests pump with realtime priority
[   12.401373] Bluetooth: HCI UART driver ver 2.3
[   12.401406] Bluetooth: HCI UART protocol H4 registered
[   12.401417] Bluetooth: HCI UART protocol BCSP registered
[   12.404734] Bluetooth: HCI UART protocol LL registered
[   12.404763] Bluetooth: HCI UART protocol ATH3K registered
[   12.404867] Bluetooth: HCI UART protocol Three-wire (H5) registered
[   12.405167] Bluetooth: HCI UART protocol Intel registered
[   12.405545] panfrost ff9a0000.gpu: mali-t860 id 0x860 major 0x2 minor 0x0 status 0x0
[   12.405571] panfrost ff9a0000.gpu: features: 00000000,00000407, issues: 00000000,24040400
[   12.405590] panfrost ff9a0000.gpu: Features: L2:0x07120206 Shader:0x00000000 Tiler:0x00000809 Mem:0x1 MMU:0x00002830 AS:0xff JS:0x7
[   12.405612] panfrost ff9a0000.gpu: shader_present=0xf l2_present=0x1
[   12.406923] Bluetooth: HCI UART protocol Broadcom registered
[   12.407046] Bluetooth: HCI UART protocol QCA registered
[   12.407059] Bluetooth: HCI UART protocol AG6XX registered
[   12.407149] Bluetooth: HCI UART protocol Marvell registered
[   12.412904] [drm] Initialized panfrost 1.2.0 20180908 for ff9a0000.gpu on minor 1
[   12.523684] dw-apb-uart ff180000.serial: failed to request DMA
[   12.536248] asoc-audio-graph-card sound: ASoC: DAPM unknown pin Headphones
[   12.691413] asoc-audio-graph-card sound: ASoC: DAPM unknown pin Headphones
[   12.700558] brcmfmac: brcmf_c_process_txcap_blob: no txcap_blob available (err=-2)
[   12.701713] brcmfmac: brcmf_c_preinit_dcmds: Firmware: BCM4345/9 wl0: Jun 16 2017 12:38:26 version XXX.XXX.96.2 (66c4e21@sh-git) (r) FWID 01-1813af84
[   12.724993] input: Analog Headphones as /devices/platform/sound/sound/card0/input0
[   12.775738] block device autoloading is deprecated and will be removed.
[   12.800391] systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
[   12.801321] Bluetooth: hci0: BCM: chip id 130
[   12.802223] Bluetooth: hci0: BCM: features 0x0f
[   12.805097] Bluetooth: hci0: BCM4345C5
[   12.805136] Bluetooth: hci0: BCM4345C5 (003.006.006) build 0000
[   12.807564] Bluetooth: hci0: BCM4345C5 'brcm/BCM4345C5.hcd' Patch
[   12.864598] systemd[1]: Starting Load/Save RF Kill Switch Status...
[   13.008216] md/raid1:md127: not clean -- starting background reconstruction
[   13.008246] md/raid1:md127: active with 2 out of 2 mirrors
[   13.042626] systemd[1]: Started Load/Save RF Kill Switch Status.
[   13.086290] systemd[1]: Started Timer to wait for more drives before activating degraded array md127..
[   13.086513] systemd[1]: Reached target Bluetooth.
[   13.086647] systemd[1]: Reached target Sound Card.
[   13.186104] md127: detected capacity change from 0 to 1953257472
[   13.334598] systemd[1]: mdadm-last-resort@md127.timer: Succeeded.
[   13.334645] systemd[1]: Stopped Timer to wait for more drives before activating degraded array md127..
[   13.364253] systemd[1]: Started MD array monitor.
[   13.404853] systemd[1]: Finished Wait for udev To Complete Device Initialization.
[   13.718326] Bluetooth: hci0: BCM: features 0x0f
[   13.722257] Bluetooth: hci0: BCM4345C5 Ampak_CL1 UART 37.4 MHz BT 5.0 [Version: Version: 0039.0089]
[   13.722297] Bluetooth: hci0: BCM4345C5 (003.006.006) build 0089
[   18.015740] systemd[1]: systemd-rfkill.service: Succeeded.
[   23.008896] platform sound-dit: deferred probe pending
[  101.050295] systemd[1]: dev-disk-by-uuid-1efe5ff5-c91e-4181-beb3-67d382095a5e.device: Job dev-disk-by-uuid-1efe5ff5-c91e-4181-beb3-67d382095a5e.device/start timed out.
[  101.050338] systemd[1]: Timed out waiting for device /dev/disk/by-uuid/1efe5ff5-c91e-4181-beb3-67d382095a5e.
[  101.050538] systemd[1]: Dependency failed for Cryptography Setup for luks-1efe5ff5-c91e-4181-beb3-67d382095a5e.
[  101.050651] systemd[1]: Dependency failed for Local Encrypted Volumes.
[  101.050756] systemd[1]: cryptsetup.target: Job cryptsetup.target/start failed with result 'dependency'.
[  101.050823] systemd[1]: systemd-cryptsetup@luks-1efe5ff5-c91e-4181-beb3-67d382095a5e.service: Job systemd-cryptsetup@luks-1efe5ff5-c91e-4181-beb3-67d382095a5e.service/start failed with result 'dependency'.
[  101.050869] systemd[1]: dev-disk-by-uuid-1efe5ff5-c91e-4181-beb3-67d382095a5e.device: Job dev-disk-by-uuid-1efe5ff5-c91e-4181-beb3-67d382095a5e.device/start failed with result 'timeout'.
[  101.050954] systemd[1]: dev-disk-by-uuid-30e48ec2-bb9f-49e0-afd1-71f5dccd2435.device: Job dev-disk-by-uuid-30e48ec2-bb9f-49e0-afd1-71f5dccd2435.device/start timed out.
[  101.050974] systemd[1]: Timed out waiting for device /dev/disk/by-uuid/30e48ec2-bb9f-49e0-afd1-71f5dccd2435.
[  101.051138] systemd[1]: Dependency failed for Cryptography Setup for luks-30e48ec2-bb9f-49e0-afd1-71f5dccd2435.
[  101.051284] systemd[1]: systemd-cryptsetup@luks-30e48ec2-bb9f-49e0-afd1-71f5dccd2435.service: Job systemd-cryptsetup@luks-30e48ec2-bb9f-49e0-afd1-71f5dccd2435.service/start failed with result 'dependency'.
[  101.051396] systemd[1]: dev-disk-by-uuid-30e48ec2-bb9f-49e0-afd1-71f5dccd2435.device: Job dev-disk-by-uuid-30e48ec2-bb9f-49e0-afd1-71f5dccd2435.device/start failed with result 'timeout'.
[  101.051548] systemd[1]: Reached target Block Device Preparation for /dev/mapper/luks-1efe5ff5-c91e-4181-beb3-67d382095a5e.
[  101.052379] systemd[1]: Reached target Block Device Preparation for /dev/mapper/luks-30e48ec2-bb9f-49e0-afd1-71f5dccd2435.
[  101.088408] systemd[1]: Starting Install ZFS kernel module...
[  101.089415] systemd[1]: Stopped target Block Device Preparation for /dev/mapper/luks-1efe5ff5-c91e-4181-beb3-67d382095a5e.
[  101.089623] systemd[1]: Stopped target Block Device Preparation for /dev/mapper/luks-30e48ec2-bb9f-49e0-afd1-71f5dccd2435.
[  101.100061] systemd[1]: zfs-load-module.service: Main process exited, code=exited, status=1/FAILURE
[  101.100924] systemd[1]: zfs-load-module.service: Failed with result 'exit-code'.
[  101.102723] systemd[1]: Failed to start Install ZFS kernel module.
[  101.102944] systemd[1]: Dependency failed for Import ZFS pools by cache file.
[  101.103070] systemd[1]: zfs-import-cache.service: Job zfs-import-cache.service/start failed with result 'dependency'.
[  101.103618] systemd[1]: Reached target ZFS pool import target.
[  101.103825] systemd[1]: Condition check resulted in Mount ZFS filesystems being skipped.
[  101.103918] systemd[1]: Reached target Local File Systems.
[  101.109267] systemd[1]: Starting Armbian leds state...
[  101.115048] systemd[1]: Starting Armbian ZRAM config...
[  101.121155] systemd[1]: Starting Set console font and keymap...
[  101.127712] systemd[1]: Starting Raise network interfaces...
[  101.134102] systemd[1]: Starting Tell Plymouth To Write Out Runtime Data...
[  101.134557] systemd[1]: Condition check resulted in Store a System Token in an EFI Variable being skipped.
[  101.134809] systemd[1]: Condition check resulted in Commit a transient machine-id on disk being skipped.
[  101.134956] systemd[1]: Condition check resulted in Wait for ZFS Volume (zvol) links in /dev being skipped.
[  101.135105] systemd[1]: Reached target ZFS volumes are ready.
[  101.150780] systemd[1]: Finished Set console font and keymap.
[  101.152479] systemd[1]: Received SIGRTMIN+20 from PID 270 (plymouthd).
[  101.158406] systemd[1]: Finished Tell Plymouth To Write Out Runtime Data.
[  101.218457] systemd[1]: Finished Armbian leds state.
[  101.322310] zram: Added device: zram0
[  101.323687] zram: Added device: zram1
[  101.324852] zram: Added device: zram2
[  101.421341] zram0: detected capacity change from 0 to 3952888
[  101.461798] Adding 1976440k swap on /dev/zram0.  Priority:5 extents:1 across:1976440k SS
[  101.484675] systemd[1]: Finished Raise network interfaces.
[  101.696246] zram1: detected capacity change from 0 to 102400
[  101.755534] systemd[1]: Finished Armbian ZRAM config.
[  101.797108] systemd[1]: Starting Armbian memory supported logging...
[  101.897210] EXT4-fs (zram1): mounted filesystem 24de4176-fb3a-49d5-bf49-baf1c2b1b99b r/w without journal. Quota mode: none.
[  101.897287] ext4 filesystem being mounted at /var/log supports timestamps until 2038-01-19 (0x7fffffff)
[  117.158724] systemd[1]: Finished Armbian memory supported logging.
[  117.188825] systemd[1]: Starting Journal Service...
[  117.595993] systemd[1]: Started Journal Service.
[  117.680099] systemd-journald[657]: Received client request to flush runtime journal.
[  118.809081] systemd-journald[657]: Received client request to relinquish /var/log/journal/515cf686f9604730835651798a8c6e08 access.
[  120.219836] rk_gmac-dwmac fe300000.ethernet eth0: Register MEM_TYPE_PAGE_POOL RxQ-0
[  120.229515] rk_gmac-dwmac fe300000.ethernet eth0: PHY [stmmac-0:00] driver [RTL8211E Gigabit Ethernet] (irq=POLL)
[  120.229571] rk_gmac-dwmac fe300000.ethernet eth0: No Safety Features support found
[  120.229595] rk_gmac-dwmac fe300000.ethernet eth0: PTP not supported by HW
[  120.231528] rk_gmac-dwmac fe300000.ethernet eth0: configuring for phy/rgmii link mode
[  124.295179] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[  124.301982] Bridge firewalling registered
[  124.328027] rk_gmac-dwmac fe300000.ethernet eth0: Link is Up - 1Gbps/Full - flow control off
[  125.022086] Initializing XFRM netlink socket
[  128.462731] systemd-journald[657]: Received client request to flush runtime journal.
[  129.138946] systemd-journald[657]: Received client request to relinquish /var/log/journal/515cf686f9604730835651798a8c6e08 access.
[  198.475560] systemd[2453]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
[  246.674030] systemd-journald[657]: Received client request to flush runtime journal.
[  246.750322] systemd-journald[657]: Received client request to relinquish /var/log/journal/515cf686f9604730835651798a8c6e08 access.
[ 1146.740058] systemd-journald[657]: Received client request to flush runtime journal.
[ 1146.808901] systemd-journald[657]: Received client request to relinquish /var/log/journal/515cf686f9604730835651798a8c6e08 access.
[ 2046.709361] systemd-journald[657]: Received client request to flush runtime journal.
[ 2046.789845] systemd-journald[657]: Received client request to relinquish /var/log/journal/515cf686f9604730835651798a8c6e08 access.
[ 2946.702604] systemd-journald[657]: Received client request to flush runtime journal.
[ 2946.775122] systemd-journald[657]: Received client request to relinquish /var/log/journal/515cf686f9604730835651798a8c6e08 access.
[ 3846.684206] systemd-journald[657]: Received client request to flush runtime journal.
[ 3846.776059] systemd-journald[657]: Received client request to relinquish /var/log/journal/515cf686f9604730835651798a8c6e08 access.
[ 4184.598033] md: resync of RAID array md127
[ 4189.064476] device-mapper: uevent: version 1.0.3
[ 4189.065162] device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
[ 4212.546008] EXT4-fs (dm-0): mounted filesystem 5602eb7e-12b3-42fb-9798-1556b2a90df5 r/w with ordered data mode. Quota mode: none.
[ 4746.765355] systemd-journald[657]: Received client request to flush runtime journal.
[ 4746.826781] systemd-journald[657]: Received client request to relinquish /var/log/journal/515cf686f9604730835651798a8c6e08 access.
[ 5462.651092] systemd-journald[657]: Received client request to flush runtime journal.
[ 5462.716166] systemd-journald[657]: Received client request to relinquish /var/log/journal/515cf686f9604730835651798a8c6e08 access.


vm.admin_reserve_kbytes = 8192
vm.compaction_proactiveness = 20
vm.compact_unevictable_allowed = 1
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirtytime_expire_seconds = 43200
vm.dirty_writeback_centisecs = 500
vm.extfrag_threshold = 500
vm.hugetlb_shm_group = 0
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 256	256	32	0
vm.max_map_count = 65530
vm.memfd_noexec = 0
vm.min_free_kbytes = 22528
vm.min_slab_ratio = 5
vm.min_unmapped_ratio = 1
vm.mmap_min_addr = 4096
vm.mmap_rnd_bits = 18
vm.mmap_rnd_compat_bits = 11
vm.nr_hugepages = 0
vm.nr_hugepages_mempolicy = 0
vm.nr_overcommit_hugepages = 0
vm.numa_stat = 1
vm.numa_zonelist_order = Node
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 0
vm.page_lock_unfairness = 5
vm.panic_on_oom = 0
vm.percpu_pagelist_high_fraction = 0
vm.stat_interval = 1
vm.swappiness = 100
vm.user_reserve_kbytes = 121725
vm.vfs_cache_pressure = 100
vm.watermark_boost_factor = 15000
vm.watermark_scale_factor = 10
vm.zone_reclaim_mode = 0

### interrupts:
           CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       
 18:          0          0          0          0          0          0     GICv3  25 Level     vgic
 20:          0          0          0          0          0          0     GICv3  27 Level     kvm guest vtimer
 23:      54633      65433      62030      49947     115779      61484     GICv3  30 Level     arch_timer
 25:      83504      64289      75597      60595      90417      72349     GICv3 113 Level     rk_timer
 31:          0          0          0          0          0          0     GICv3  37 Level     ff6d0000.dma-controller
 32:          0          0          0          0          0          0     GICv3  38 Level     ff6d0000.dma-controller
 33:          0          0          0          0          0          0     GICv3  39 Level     ff6e0000.dma-controller
 34:          0          0          0          0          0          0     GICv3  40 Level     ff6e0000.dma-controller
 35:       1236          0          0          0          0          0     GICv3 131 Level     ttyS0
 36:        693          0          0          0          0          0     GICv3 132 Level     ttyS2
 37:          0          0          0          0          0          0     GICv3 147 Level     ff650800.iommu
 38:          0          0          0          0          0          0     GICv3 149 Level     ff660480.iommu
 39:          0          0          0          0          0          0     GICv3 151 Level     ff8f3f00.iommu, ff8f0000.vop
 40:          0          0          0          0          0          0     GICv3 150 Level     ff903f00.iommu, ff900000.vop
 41:          0          0          0          0          0          0     GICv3  75 Level     ff914000.iommu
 42:          0          0          0          0          0          0     GICv3  76 Level     ff924000.iommu
 43:         66          0          0          0          0          0     GICv3  91 Level     ff110000.i2c
 44:          0          0          0          0          0          0     GICv3  66 Level     ff130000.i2c
 45:      61977          0          0          0          0          0     GICv3  68 Level     ff160000.i2c
 46:        429          0          0          0          0          0     GICv3  89 Level     ff3c0000.i2c
 47:          0          0          0          0          0          0     GICv3  88 Level     ff3d0000.i2c
 48:          0          0          0          0          0          0     GICv3 129 Level     rockchip_thermal
 49:          0          0          0          0          0          0     GICv3 152 Edge      ff848000.watchdog
 50:       9719          0          0          0          0          0     GICv3  96 Level     dw-mci
 51:          0          0          0          0          0          0     GICv3  97 Level     dw-mci
 52:      36031          0          0          0          0          0     GICv3  43 Level     mmc0
 53:          3          0          0          0          0          0     GICv3  94 Level     ff100000.saradc
 54:          0          0          0          0          0          0  GICv3-23   0 Level     arm-pmu
 55:          0          0          0          0          0          0  GICv3-23   1 Level     arm-pmu
 56:          0          0          0          0          0          0  rockchip_gpio_irq   7 Edge      fe320000.mmc cd
 57:          0          0          0          0          0          0     GICv3  59 Level     rockchip_usb2phy
 58:          0          0          0          0          0          0     GICv3  63 Level     rockchip_usb2phy
 59:          0          0          0          0          0          0     GICv3 137 Level     xhci-hcd:usb1
 60:          0          0          0          0          0          0     GICv3 142 Level     xhci-hcd:usb3
 61:          0          0          0          0          0          0     GICv3  58 Level     ehci_hcd:usb5
 62:          0          0          0          0          0          0     GICv3  62 Level     ehci_hcd:usb7
 63:          0          0          0          0          0          0     GICv3  60 Level     ohci_hcd:usb6
 64:          0          0          0          0          0          0  rockchip_gpio_irq  21 Level     rk808
 70:          0          0          0          0          0          0     rk808   5 Edge      RTC alarm
 74:          0          0          0          0          0          0     GICv3  64 Level     ohci_hcd:usb8
 75:          0          0          0          0          0          0     GICv3  81 Level     pcie-sys
 77:        154          0          0          0          0          0     GICv3  83 Level     pcie-client
 79:          0          0          0          0          0          0   ITS-MSI   0 Edge      PCIe PME, aerdrv
 80:          0     300801          0          0          0          0   ITS-MSI 524288 Edge      ahci0
 81:          0          0          0          0          0          0   ITS-MSI 524289 Edge      ahci1
 82:          0          0          0          0          0          0   ITS-MSI 524290 Edge      ahci2
 83:          0          0          0          0     166292          0   ITS-MSI 524291 Edge      ahci3
 84:          0          0          0          0          0          0   ITS-MSI 524292 Edge      ahci4
 88:          0          0          0          0          0          0     GICv3  55 Level     ff940000.hdmi, dw-hdmi-cec
 89:      25429          0          0          0          0          0     GICv3  44 Level     eth0
 90:          0          0          0          0          0          0     GICv3  87 Level     ff680000.rga
 91:          0          0          0          0          0          0  rockchip_gpio_irq   1 Level     es8316
 92:          0          0          0          0          0          0     GICv3 146 Level     ff650000.video-codec
 93:          0          0          0          0          0          0     GICv3 145 Level     ff650000.video-codec
 94:        335          0          0          0          0          0  rockchip_gpio_irq   3 Level     brcmf_oob_intr
 95:          0          0          0          0          0          0     GICv3 148 Level     ff660000.video-codec
 96:          0          0          0          0          0          0     GICv3  32 Level     rk-crypto
 97:          0          0          0          0          0          0     GICv3 167 Level     rk-crypto
 98:          0          0          0          0          0          0     GICv3  51 Level     panfrost-gpu
 99:          0          0          0          0          0          0     GICv3  53 Level     panfrost-mmu
100:          6          0          0          0          0          0  rockchip_gpio_irq   4 Edge      host_wake
101:          0          0          0          0          0          0     GICv3  52 Level     panfrost-job
102:          0          0          0          0          0          0  rockchip_gpio_irq   0 Edge      Headphone detection
IPI0:      7051       6118       5124       6091       4565       4011       Rescheduling interrupts
IPI1:    142253      90762     189514     163483     115662      95731       Function call interrupts
IPI2:         0          0          0          0          0          0       CPU stop interrupts
IPI3:         0          0          0          0          0          0       CPU stop (for crash dump) interrupts
IPI4:     22657      24662      28699      29222      29184      28897       Timer broadcast interrupts
IPI5:         0          0          0          0          0          0       IRQ work interrupts
IPI6:         0          0          0          0          0          0       CPU wake-up interrupts
Err:          0

 


  • zeyoner changed the title to ZFS, header and kernel mismatch?

Did some more searching and came across a post of yours.

 


After sudo apt update && apt upgrade -y and installing linux-headers-current-rockchip64, followed by some cleaning up and removal of the other kernel, I reinstalled the zfsutils-linux, zfs-dkms and zfs-zed packages and ran into errors.

 

Setting up zfs-dkms (2.0.3-9+deb11u1) ...
Removing old zfs-2.0.3 DKMS files...

------------------------------
Deleting module version: 2.0.3
completely from the DKMS tree.
------------------------------
Done.
Loading new zfs-2.0.3 DKMS files...
Building for 6.6.39-current-rockchip64
Building initial module for 6.6.39-current-rockchip64
configure: error:
        *** None of the expected "PDE_DATA" interfaces were detected.
        *** This may be because your kernel version is newer than what is
        *** supported, or you are using a patched custom kernel with
        *** incompatible modifications.
        ***
        *** ZFS Version: zfs-2.0.3-9+deb11u1
        *** Compatible Kernels: 3.10 - 5.10

Error! Bad return status for module build on kernel: 6.6.39-current-rockchip64 (aarch64)
Consult /var/lib/dkms/zfs/2.0.3/build/make.log for more information.
dpkg: error processing package zfs-dkms (--configure):
 installed zfs-dkms package post-installation script subprocess returned error exit status 10
dpkg: dependency problems prevent configuration of zfs-zed:
 zfs-zed depends on zfs-modules | zfs-dkms; however:
  Package zfs-modules is not installed.
  Package zfs-dkms which provides zfs-modules is not configured yet.
  Package zfs-dkms is not configured yet.

dpkg: error processing package zfs-zed (--configure):
 dependency problems - leaving unconfigured
Processing triggers for initramfs-tools (0.140) ...
update-initramfs: Generating /boot/initrd.img-6.6.39-current-rockchip64
update-initramfs: Armbian: Converting to u-boot format: /boot/uInitrd-6.6.39-current-rockchip64
Image Name:   uInitrd
Created:      Mon Aug  5 02:43:51 2024
Image Type:   AArch64 Linux RAMDisk Image (gzip compressed)
Data Size:    20403257 Bytes = 19925.06 KiB = 19.46 MiB
Load Address: 00000000
Entry Point:  00000000
update-initramfs: Armbian: Symlinking /boot/uInitrd-6.6.39-current-rockchip64 to /boot/uInitrd
'/boot/uInitrd' -> 'uInitrd-6.6.39-current-rockchip64'
update-initramfs: Armbian: done.
Errors were encountered while processing:
 zfs-dkms
 zfs-zed
E: Sub-process /usr/bin/dpkg returned an error code (1)

 

/var/lib/dkms/zfs/2.0.3/build/make.log:

DKMS make.log for zfs-2.0.3 for kernel 6.6.39-current-rockchip64 (aarch64)
Mon 05 Aug 2024 02:42:54 AM EDT
make: *** No targets specified and no makefile found.  Stop.

 

 

Seems as though I have to use kernel 5.10...?
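The DKMS output above already pins down the cause: zfs-dkms 2.0.3 only builds against kernels 3.10 through 5.10, while the running kernel is 6.6.39. A minimal sketch (versions hard-coded from the log above) shows the mismatch with a sort -V comparison, without rebuilding anything:

```shell
# Versions taken from the DKMS error above; adjust for your system.
running=6.6.39        # from: uname -r -> 6.6.39-current-rockchip64
zfs_max=5.10          # from: "Compatible Kernels: 3.10 - 5.10"

# sort -V orders version strings numerically; if the newest of the two
# is not zfs_max, the running kernel is outside the supported range.
newest=$(printf '%s\n%s\n' "$running" "$zfs_max" | sort -V | tail -n1)
if [ "$newest" != "$zfs_max" ]; then
    echo "kernel $running is newer than ZFS's supported max $zfs_max"
fi
```

So the build failure is a ZFS-version problem, not a headers problem: no 5.10 headers will make this zfs-dkms build for 6.6.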


  • Solution
2 hours ago, zeyoner said:

I have to use kernel 5.10...?

 

No. You need a more recent ZFS, which comes with a supported Armbian OS release, i.e. Bookworm. Bullseye generally still works, but we no longer support it, as upstream is also abandoning it. We ship ZFS v2.2.4, soon upgrading to 2.2.5, but only on:

 

image.png

 

5 hours ago, zeyoner said:
ii  linux-headers-5.10.0-31-arm64     5.10.221-1                     arm64        Header files for Linux 5.10.0-31-arm64
ii  linux-headers-5.10.0-31-common    5.10.221-1                     all          Common header files for Linux 5.10.0-31
ii  linux-headers-arm64               5.10.221-1                     arm64        Header files for Linux arm64 configuration (meta-package)
hi  linux-image-5.10.0-31-arm64       5.10.221-1                     arm64        Linux 5.10 for 64-bit ARMv8 machines (signed)


All those headers are useless; remove them, along with the generic 5.10 kernel, as it probably won't even boot. You need:
 

apt install linux-headers-current-rockchip64
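In practice this advice amounts to unholding and purging the Debian generic 5.10 packages (names taken from the dpkg listing quoted above), installing the Armbian headers, and letting DKMS rebuild the ZFS modules. A dry-run sketch that only prints each command, so the list can be reviewed before running it for real:

```shell
# Dry run: print each command instead of executing it. Change run() to
# execute "$@" once the list looks right for your system.
run() { echo "+ $*"; }

# Package names taken from the dpkg -l output quoted in this thread.
run sudo apt-mark unhold linux-image-5.10.0-31-arm64
run sudo apt purge linux-image-5.10.0-31-arm64 \
    linux-headers-5.10.0-31-arm64 linux-headers-5.10.0-31-common \
    linux-headers-arm64
run sudo apt install linux-headers-current-rockchip64
run sudo apt install --reinstall zfs-dkms zfsutils-linux zfs-zed
```

The --reinstall of zfs-dkms is what retriggers the DKMS build against the now-matching 6.6.39-current-rockchip64 headers.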

 

Fresh Bookworm image, same (sub)arch:

Loading new zfs-2.2.4 DKMS files...
Building for 6.6.39-current-rockchip64
Building initial module for 6.6.39-current-rockchip64
Done.

zfs.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/6.6.39-current-rockchip64/updates/dkms/

spl.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/6.6.39-current-rockchip64/updates/dkms/
depmod.....
root@odroidm1:/home/igorp# modinfo zfs
filename:       /lib/modules/6.6.39-current-rockchip64/updates/dkms/zfs.ko
version:        2.2.4-1~bpo12+1
license:        CDDL
license:        Dual BSD/GPL
license:        Dual MIT/GPL
author:         OpenZFS
description:    ZFS
alias:          zzstd
alias:          zcommon
alias:          zunicode
alias:          znvpair
alias:          zlua
alias:          icp
alias:          zavl
alias:          devname:zfs
alias:          char-major-10-249
srcversion:     0242C2AA732639906F5EB42
depends:        spl
name:           zfs
vermagic:       6.6.39-current-rockchip64 SMP preempt mod_unload aarch64
parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)
parm:           zvol_major:Major number for zvol device (uint)
parm:           zvol_threads:Number of threads to handle I/O requests. Set to 0 to use all active CPUs (uint)
parm:           zvol_request_sync:Synchronously handle bio requests (uint)
parm:           zvol_max_discard_blocks:Max number of blocks to discard (ulong)
parm:           zvol_num_taskqs:Number of zvol taskqs (uint)
parm:           zvol_prefetch_bytes:Prefetch N bytes at zvol start+end (uint)
parm:           zvol_volmode:Default volmode property value (uint)
parm:           zvol_blk_mq_queue_depth:Default blk-mq queue depth (uint)
parm:           zvol_use_blk_mq:Use the blk-mq API for zvols (uint)
parm:           zvol_blk_mq_blocks_per_thread:Process volblocksize blocks per thread (uint)
parm:           zvol_open_timeout_ms:Timeout for ZVOL open retries (uint)
parm:           zfs_xattr_compat:Use legacy ZFS xattr naming for writing new user namespace xattrs
parm:           zfs_fallocate_reserve_percent:Percentage of length to use for the available capacity check (uint)
parm:           zfs_key_max_salt_uses:Max number of times a salt value can be used for generating encryption keys before it is rotated (ulong)
parm:           zfs_object_mutex_size:Size of znode hold array (uint)
parm:           zfs_unlink_suspend_progress:Set to prevent async unlinks (debug - leaks space into the unlinked set) (int)
parm:           zfs_delete_blocks:Delete files larger than N blocks async (ulong)
parm:           zfs_dbgmsg_enable:Enable ZFS debug message log (int)
parm:           zfs_dbgmsg_maxsize:Maximum ZFS debug log size (uint)
parm:           zfs_admin_snapshot:Enable mkdir/rmdir/mv in .zfs/snapshot (int)
parm:           zfs_expire_snapshot:Seconds to expire .zfs/snapshot (int)
parm:           vdev_file_logical_ashift:Logical ashift for file-based devices
parm:           vdev_file_physical_ashift:Physical ashift for file-based devices
parm:           zfs_vdev_scheduler:I/O scheduler
parm:           zfs_vdev_open_timeout_ms:Timeout before determining that a device is missing
parm:           zfs_vdev_failfast_mask:Defines failfast mask: 1 - device, 2 - transport, 4 - driver
parm:           zfs_vdev_disk_max_segs:Maximum number of data segments to add to an IO request (min 4)
parm:           zfs_vdev_disk_classic:Use classic BIO submission method
parm:           zfs_arc_shrinker_limit:Limit on number of pages that ARC shrinker can reclaim at once
parm:           zfs_abd_scatter_enabled:Toggle whether ABD allocations must be linear. (int)
parm:           zfs_abd_scatter_min_size:Minimum size of scatter allocations. (int)
parm:           zfs_abd_scatter_max_order:Maximum order allocation used for a scatter ABD. (uint)
parm:           zio_slow_io_ms:Max I/O completion time (milliseconds) before marking it as slow
parm:           zio_requeue_io_start_cut_in_line:Prioritize requeued I/O
parm:           zfs_sync_pass_deferred_free:Defer frees starting in this pass
parm:           zfs_sync_pass_dont_compress:Don't compress starting in this pass
parm:           zfs_sync_pass_rewrite:Rewrite new bps starting in this pass
parm:           zio_dva_throttle_enabled:Throttle block allocations in the ZIO pipeline
parm:           zio_deadman_log_all:Log all slow ZIOs, not just those with vdevs
parm:           zfs_commit_timeout_pct:ZIL block open timeout percentage
parm:           zil_replay_disable:Disable intent logging replay
parm:           zil_nocacheflush:Disable ZIL cache flushes
parm:           zil_slog_bulk:Limit in bytes slog sync writes per commit
parm:           zil_maxblocksize:Limit in bytes of ZIL log block size
parm:           zil_maxcopied:Limit in bytes WR_COPIED size
parm:           zfs_vnops_read_chunk_size:Bytes to read per chunk
parm:           zfs_bclone_enabled:Enable block cloning
parm:           zfs_bclone_wait_dirty:Wait for dirty blocks when cloning
parm:           zfs_zil_saxattr:Disable xattr=sa extended attribute logging in ZIL by settng 0.
parm:           zfs_immediate_write_sz:Largest data block to write to zil
parm:           zfs_max_nvlist_src_size:Maximum size in bytes allowed for src nvlist passed with ZFS ioctls
parm:           zfs_history_output_max:Maximum size in bytes of ZFS ioctl output that will be logged
parm:           zfs_zevent_retain_max:Maximum recent zevents records to retain for duplicate checking
parm:           zfs_zevent_retain_expire_secs:Expiration time for recent zevents records
parm:           zfs_lua_max_instrlimit:Max instruction limit that can be specified for a channel program
parm:           zfs_lua_max_memlimit:Max memory limit that can be specified for a channel program
parm:           zap_micro_max_size:Maximum micro ZAP size, before converting to a fat ZAP, in bytes
parm:           zap_iterate_prefetch:When iterating ZAP object, prefetch it
parm:           zfs_trim_extent_bytes_max:Max size of TRIM commands, larger will be split
parm:           zfs_trim_extent_bytes_min:Min size of TRIM commands, smaller will be skipped
parm:           zfs_trim_metaslab_skip:Skip metaslabs which have never been initialized
parm:           zfs_trim_txg_batch:Min number of txgs to aggregate frees before issuing TRIM
parm:           zfs_trim_queue_limit:Max queued TRIMs outstanding per leaf vdev
parm:           zfs_removal_ignore_errors:Ignore hard IO errors when removing device
parm:           zfs_remove_max_segment:Largest contiguous segment to allocate when removing device
parm:           vdev_removal_max_span:Largest span of free chunks a remap segment can span
parm:           zfs_removal_suspend_progress:Pause device removal after this many bytes are copied (debug use only - causes removal to hang)
parm:           zfs_rebuild_max_segment:Max segment size in bytes of rebuild reads
parm:           zfs_rebuild_vdev_limit:Max bytes in flight per leaf vdev for sequential resilvers
parm:           zfs_rebuild_scrub_enabled:Automatically scrub after sequential resilver completes
parm:           zfs_vdev_raidz_impl:Select raidz implementation.
parm:           zfs_vdev_aggregation_limit:Max vdev I/O aggregation size
parm:           zfs_vdev_aggregation_limit_non_rotating:Max vdev I/O aggregation size for non-rotating media
parm:           zfs_vdev_read_gap_limit:Aggregate read I/O over gap
parm:           zfs_vdev_write_gap_limit:Aggregate write I/O over gap
parm:           zfs_vdev_max_active:Maximum number of active I/Os per vdev
parm:           zfs_vdev_async_write_active_max_dirty_percent:Async write concurrency max threshold
parm:           zfs_vdev_async_write_active_min_dirty_percent:Async write concurrency min threshold
parm:           zfs_vdev_async_read_max_active:Max active async read I/Os per vdev
parm:           zfs_vdev_async_read_min_active:Min active async read I/Os per vdev
parm:           zfs_vdev_async_write_max_active:Max active async write I/Os per vdev
parm:           zfs_vdev_async_write_min_active:Min active async write I/Os per vdev
parm:           zfs_vdev_initializing_max_active:Max active initializing I/Os per vdev
parm:           zfs_vdev_initializing_min_active:Min active initializing I/Os per vdev
parm:           zfs_vdev_removal_max_active:Max active removal I/Os per vdev
parm:           zfs_vdev_removal_min_active:Min active removal I/Os per vdev
parm:           zfs_vdev_scrub_max_active:Max active scrub I/Os per vdev
parm:           zfs_vdev_scrub_min_active:Min active scrub I/Os per vdev
parm:           zfs_vdev_sync_read_max_active:Max active sync read I/Os per vdev
parm:           zfs_vdev_sync_read_min_active:Min active sync read I/Os per vdev
parm:           zfs_vdev_sync_write_max_active:Max active sync write I/Os per vdev
parm:           zfs_vdev_sync_write_min_active:Min active sync write I/Os per vdev
parm:           zfs_vdev_trim_max_active:Max active trim/discard I/Os per vdev
parm:           zfs_vdev_trim_min_active:Min active trim/discard I/Os per vdev
parm:           zfs_vdev_rebuild_max_active:Max active rebuild I/Os per vdev
parm:           zfs_vdev_rebuild_min_active:Min active rebuild I/Os per vdev
parm:           zfs_vdev_nia_credit:Number of non-interactive I/Os to allow in sequence
parm:           zfs_vdev_nia_delay:Number of non-interactive I/Os before _max_active
parm:           zfs_vdev_queue_depth_pct:Queue depth percentage for each top-level vdev
parm:           zfs_vdev_def_queue_depth:Default queue depth for each allocator
parm:           zfs_vdev_mirror_rotating_inc:Rotating media load increment for non-seeking I/Os
parm:           zfs_vdev_mirror_rotating_seek_inc:Rotating media load increment for seeking I/Os
parm:           zfs_vdev_mirror_rotating_seek_offset:Offset in bytes from the last I/O which triggers a reduced rotating media seek increment
parm:           zfs_vdev_mirror_non_rotating_inc:Non-rotating media load increment for non-seeking I/Os
parm:           zfs_vdev_mirror_non_rotating_seek_inc:Non-rotating media load increment for seeking I/Os
parm:           zfs_initialize_value:Value written during zpool initialize
parm:           zfs_initialize_chunk_size:Size in bytes of writes by zpool initialize
parm:           zfs_condense_indirect_vdevs_enable:Whether to attempt condensing indirect vdev mappings
parm:           zfs_condense_indirect_obsolete_pct:Minimum obsolete percent of bytes in the mapping to attempt condensing
parm:           zfs_condense_min_mapping_bytes:Don't bother condensing if the mapping uses less than this amount of memory
parm:           zfs_condense_max_obsolete_bytes:Minimum size obsolete spacemap to attempt condensing
parm:           zfs_condense_indirect_commit_entry_delay_ms:Used by tests to ensure certain actions happen in the middle of a condense. A maximum value of 1 should be sufficient.
parm:           zfs_reconstruct_indirect_combinations_max:Maximum number of combinations when reconstructing split segments
parm:           zfs_vdev_default_ms_count:Target number of metaslabs per top-level vdev
parm:           zfs_vdev_default_ms_shift:Default lower limit for metaslab size
parm:           zfs_vdev_max_ms_shift:Default upper limit for metaslab size
parm:           zfs_vdev_min_ms_count:Minimum number of metaslabs per top-level vdev
parm:           zfs_vdev_ms_count_limit:Practical upper limit of total metaslabs per top-level vdev
parm:           zfs_slow_io_events_per_second:Rate limit slow IO (delay) events to this many per second
parm:           zfs_checksum_events_per_second:Rate limit checksum events to this many checksum errors per second (do not set below ZED threshold).
parm:           zfs_scan_ignore_errors:Ignore errors during resilver/scrub
parm:           vdev_validate_skip:Bypass vdev_validate()
parm:           zfs_nocacheflush:Disable cache flushes
parm:           zfs_embedded_slog_min_ms:Minimum number of metaslabs required to dedicate one for log blocks
parm:           zfs_vdev_min_auto_ashift:Minimum ashift used when creating new top-level vdevs
parm:           zfs_vdev_max_auto_ashift:Maximum ashift used when optimizing for logical -> physical sector size on new top-level vdevs
parm:           zfs_txg_timeout:Max seconds worth of delta per txg
parm:           zfs_read_history:Historical statistics for the last N reads
parm:           zfs_read_history_hits:Include cache hits in read history
parm:           zfs_txg_history:Historical statistics for the last N txgs
parm:           zfs_multihost_history:Historical statistics for last N multihost writes
parm:           zfs_flags:Set additional debugging flags
parm:           zfs_recover:Set to attempt to recover from fatal errors
parm:           zfs_free_leak_on_eio:Set to ignore IO errors during free and permanently leak the space
parm:           zfs_deadman_checktime_ms:Dead I/O check interval in milliseconds
parm:           zfs_deadman_enabled:Enable deadman timer
parm:           spa_asize_inflation:SPA size estimate multiplication factor
parm:           zfs_ddt_data_is_special:Place DDT data into the special class
parm:           zfs_user_indirect_is_special:Place user data indirect blocks into the special class
parm:           zfs_deadman_failmode:Failmode for deadman timer
parm:           zfs_deadman_synctime_ms:Pool sync expiration time in milliseconds
parm:           zfs_deadman_ziotime_ms:IO expiration time in milliseconds
parm:           zfs_special_class_metadata_reserve_pct:Small file blocks in special vdevs depends on this much free space available
parm:           spa_slop_shift:Reserved free space in pool
parm:           zfs_unflushed_max_mem_amt:Specific hard-limit in memory that ZFS allows to be used for unflushed changes
parm:           zfs_unflushed_max_mem_ppm:Percentage of the overall system memory that ZFS allows to be used for unflushed changes (value is calculated over 1000000 for finer granularity)
parm:           zfs_unflushed_log_block_max:Hard limit (upper-bound) in the size of the space map log in terms of blocks.
parm:           zfs_unflushed_log_block_min:Lower-bound limit for the maximum amount of blocks allowed in log spacemap (see zfs_unflushed_log_block_max)
parm:           zfs_unflushed_log_txg_max:Hard limit (upper-bound) in the size of the space map log in terms of dirty TXGs.
parm:           zfs_unflushed_log_block_pct:Tunable used to determine the number of blocks that can be used for the spacemap log, expressed as a percentage of the total number of metaslabs in the pool (e.g. 400 means the number of log blocks is capped at 4 times the number of metaslabs)
parm:           zfs_max_log_walking:The number of past TXGs that the flushing algorithm of the log spacemap feature uses to estimate incoming log blocks
parm:           zfs_keep_log_spacemaps_at_export:Prevent the log spacemaps from being flushed and destroyed during pool export/destroy
parm:           zfs_max_logsm_summary_length:Maximum number of rows allowed in the summary of the spacemap log
parm:           zfs_min_metaslabs_to_flush:Minimum number of metaslabs to flush per dirty TXG
parm:           spa_upgrade_errlog_limit:Limit the number of errors which will be upgraded to the new on-disk error log when enabling head_errlog
parm:           spa_config_path:SPA config file (/etc/zfs/zpool.cache)
parm:           zfs_autoimport_disable:Disable pool import at module load
parm:           zfs_spa_discard_memory_limit:Limit for memory used in prefetching the checkpoint space map done on each vdev while discarding the checkpoint
parm:           metaslab_preload_pct:Percentage of CPUs to run a metaslab preload taskq
parm:           spa_load_verify_shift:log2 fraction of arc that can be used by inflight I/Os when verifying pool during import
parm:           spa_load_verify_metadata:Set to traverse metadata on pool import
parm:           spa_load_verify_data:Set to traverse data on pool import
parm:           spa_load_print_vdev_tree:Print vdev tree to zfs_dbgmsg during pool import
parm:           zio_taskq_batch_pct:Percentage of CPUs to run an IO worker thread
parm:           zio_taskq_batch_tpq:Number of threads per IO worker taskqueue
parm:           zfs_max_missing_tvds:Allow importing pool with up to this number of missing top-level vdevs (in read-only mode)
parm:           zfs_livelist_condense_zthr_pause:Set the livelist condense zthr to pause
parm:           zfs_livelist_condense_sync_pause:Set the livelist condense synctask to pause
parm:           zfs_livelist_condense_sync_cancel:Whether livelist condensing was canceled in the synctask
parm:           zfs_livelist_condense_zthr_cancel:Whether livelist condensing was canceled in the zthr function
parm:           zfs_livelist_condense_new_alloc:Whether extra ALLOC blkptrs were added to a livelist entry while it was being condensed
parm:           zio_taskq_read:Configure IO queues for read IO
parm:           zio_taskq_write:Configure IO queues for write IO
parm:           zfs_multilist_num_sublists:Number of sublists used in each multilist
parm:           zfs_multihost_interval:Milliseconds between mmp writes to each leaf
parm:           zfs_multihost_fail_intervals:Max allowed period without a successful mmp write
parm:           zfs_multihost_import_intervals:Number of zfs_multihost_interval periods to wait for activity
parm:           metaslab_aliquot:Allocation granularity (a.k.a. stripe size)
parm:           metaslab_debug_load:Load all metaslabs when pool is first opened
parm:           metaslab_debug_unload:Prevent metaslabs from being unloaded
parm:           metaslab_preload_enabled:Preload potential metaslabs during reassessment
parm:           metaslab_preload_limit:Max number of metaslabs per group to preload
parm:           metaslab_unload_delay:Delay in txgs after metaslab was last used before unloading
parm:           metaslab_unload_delay_ms:Delay in milliseconds after metaslab was last used before unloading
parm:           zfs_mg_noalloc_threshold:Percentage of metaslab group size that should be free to make it eligible for allocation
parm:           zfs_mg_fragmentation_threshold:Percentage of metaslab group size that should be considered eligible for allocations unless all metaslab groups within the metaslab class have also crossed this threshold
parm:           metaslab_fragmentation_factor_enabled:Use the fragmentation metric to prefer less fragmented metaslabs
parm:           zfs_metaslab_fragmentation_threshold:Fragmentation for metaslab to allow allocation
parm:           metaslab_lba_weighting_enabled:Prefer metaslabs with lower LBAs
parm:           metaslab_bias_enabled:Enable metaslab group biasing
parm:           zfs_metaslab_segment_weight_enabled:Enable segment-based metaslab selection
parm:           zfs_metaslab_switch_threshold:Segment-based metaslab selection maximum buckets before switching
parm:           metaslab_force_ganging:Blocks larger than this size are sometimes forced to be gang blocks
parm:           metaslab_force_ganging_pct:Percentage of large blocks that will be forced to be gang blocks
parm:           metaslab_df_max_search:Max distance (bytes) to search forward before using size tree
parm:           metaslab_df_use_largest_segment:When looking in size tree, use largest segment instead of exact fit
parm:           zfs_metaslab_max_size_cache_sec:How long to trust the cached max chunk size of a metaslab
parm:           zfs_metaslab_mem_limit:Percentage of memory that can be used to store metaslab range trees
parm:           zfs_metaslab_try_hard_before_gang:Try hard to allocate before ganging
parm:           zfs_metaslab_find_max_tries:Normally only consider this many of the best metaslabs in each vdev
parm:           zfs_zevent_len_max:Max event queue length
parm:           zfs_scan_vdev_limit:Max bytes in flight per leaf vdev for scrubs and resilvers
parm:           zfs_scrub_min_time_ms:Min millisecs to scrub per txg
parm:           zfs_obsolete_min_time_ms:Min millisecs to obsolete per txg
parm:           zfs_free_min_time_ms:Min millisecs to free per txg
parm:           zfs_resilver_min_time_ms:Min millisecs to resilver per txg
parm:           zfs_scan_suspend_progress:Set to prevent scans from progressing
parm:           zfs_no_scrub_io:Set to disable scrub I/O
parm:           zfs_no_scrub_prefetch:Set to disable scrub prefetching
parm:           zfs_async_block_max_blocks:Max number of blocks freed in one txg
parm:           zfs_max_async_dedup_frees:Max number of dedup blocks freed in one txg
parm:           zfs_free_bpobj_enabled:Enable processing of the free_bpobj
parm:           zfs_scan_blkstats:Enable block statistics calculation during scrub
parm:           zfs_scan_mem_lim_fact:Fraction of RAM for scan hard limit
parm:           zfs_scan_issue_strategy:IO issuing strategy during scrubbing. 0 = default, 1 = LBA, 2 = size
parm:           zfs_scan_legacy:Scrub using legacy non-sequential method
parm:           zfs_scan_checkpoint_intval:Scan progress on-disk checkpointing interval
parm:           zfs_scan_max_ext_gap:Max gap in bytes between sequential scrub / resilver I/Os
parm:           zfs_scan_mem_lim_soft_fact:Fraction of hard limit used as soft limit
parm:           zfs_scan_strict_mem_lim:Tunable to attempt to reduce lock contention
parm:           zfs_scan_fill_weight:Tunable to adjust bias towards more filled segments during scans
parm:           zfs_scan_report_txgs:Tunable to report resilver performance over the last N txgs
parm:           zfs_resilver_disable_defer:Process all resilvers immediately
parm:           zfs_scrub_error_blocks_per_txg:Error blocks to be scrubbed in one txg
parm:           zfs_dirty_data_max_percent:Max percent of RAM allowed to be dirty
parm:           zfs_dirty_data_max_max_percent:zfs_dirty_data_max upper bound as % of RAM
parm:           zfs_delay_min_dirty_percent:Transaction delay threshold
parm:           zfs_dirty_data_max:Determines the dirty space limit
parm:           zfs_wrlog_data_max:The size limit of write-transaction zil log data
parm:           zfs_dirty_data_max_max:zfs_dirty_data_max upper bound in bytes
parm:           zfs_dirty_data_sync_percent:Dirty data txg sync threshold as a percentage of zfs_dirty_data_max
parm:           zfs_delay_scale:How quickly delay approaches infinity
parm:           zfs_sync_taskq_batch_pct:Max percent of CPUs that are used to sync dirty data
parm:           zfs_zil_clean_taskq_nthr_pct:Max percent of CPUs that are used per dp_sync_taskq
parm:           zfs_zil_clean_taskq_minalloc:Number of taskq entries that are pre-populated
parm:           zfs_zil_clean_taskq_maxalloc:Max number of taskq entries that are cached
parm:           zvol_enforce_quotas:Enable strict ZVOL quota enforcment
parm:           zfs_livelist_max_entries:Size to start the next sub-livelist in a livelist
parm:           zfs_livelist_min_percent_shared:Threshold at which livelist is disabled
parm:           zfs_max_recordsize:Max allowed record size
parm:           zfs_allow_redacted_dataset_mount:Allow mounting of redacted datasets
parm:           zfs_snapshot_history_enabled:Include snapshot events in pool history/events
parm:           zfs_disable_ivset_guid_check:Set to allow raw receives without IVset guids
parm:           zfs_default_bs:Default dnode block shift
parm:           zfs_default_ibs:Default dnode indirect block shift
parm:           zfs_prefetch_disable:Disable all ZFS prefetching
parm:           zfetch_max_streams:Max number of streams per zfetch
parm:           zfetch_min_sec_reap:Min time before stream reclaim
parm:           zfetch_max_sec_reap:Max time before stream delete
parm:           zfetch_min_distance:Min bytes to prefetch per stream
parm:           zfetch_max_distance:Max bytes to prefetch per stream
parm:           zfetch_max_idistance:Max bytes to prefetch indirects for per stream
parm:           zfetch_max_reorder:Max request reorder distance within a stream
parm:           zfetch_hole_shift:Max log2 fraction of holes in a stream
parm:           zfs_pd_bytes_max:Max number of bytes to prefetch
parm:           zfs_traverse_indirect_prefetch_limit:Traverse prefetch number of blocks pointed by indirect block
parm:           ignore_hole_birth:Alias for send_holes_without_birth_time (int)
parm:           send_holes_without_birth_time:Ignore hole_birth txg for zfs send
parm:           zfs_send_corrupt_data:Allow sending corrupt data
parm:           zfs_send_queue_length:Maximum send queue length
parm:           zfs_send_unmodified_spill_blocks:Send unmodified spill blocks
parm:           zfs_send_no_prefetch_queue_length:Maximum send queue length for non-prefetch queues
parm:           zfs_send_queue_ff:Send queue fill fraction
parm:           zfs_send_no_prefetch_queue_ff:Send queue fill fraction for non-prefetch queues
parm:           zfs_override_estimate_recordsize:Override block size estimate with fixed size
parm:           zfs_recv_queue_length:Maximum receive queue length
parm:           zfs_recv_queue_ff:Receive queue fill fraction
parm:           zfs_recv_write_batch_size:Maximum amount of writes to batch into one transaction
parm:           zfs_recv_best_effort_corrective:Ignore errors during corrective receive
parm:           dmu_object_alloc_chunk_shift:CPU-specific allocator grabs 2^N objects at once
parm:           zfs_nopwrite_enabled:Enable NOP writes
parm:           zfs_per_txg_dirty_frees_percent:Percentage of dirtied blocks from frees in one TXG
parm:           zfs_dmu_offset_next_sync:Enable forcing txg sync to find holes
parm:           dmu_prefetch_max:Limit one prefetch call to this size
parm:           ddt_zap_default_bs:DDT ZAP leaf blockshift
parm:           ddt_zap_default_ibs:DDT ZAP indirect blockshift
parm:           zfs_dedup_prefetch:Enable prefetching dedup-ed blks
parm:           zfs_dbuf_state_index:Calculate arc header index
parm:           dbuf_cache_max_bytes:Maximum size in bytes of the dbuf cache.
parm:           dbuf_cache_hiwater_pct:Percentage over dbuf_cache_max_bytes for direct dbuf eviction.
parm:           dbuf_cache_lowater_pct:Percentage below dbuf_cache_max_bytes when dbuf eviction stops.
parm:           dbuf_metadata_cache_max_bytes:Maximum size in bytes of dbuf metadata cache.
parm:           dbuf_cache_shift:Set size of dbuf cache to log2 fraction of arc size.
parm:           dbuf_metadata_cache_shift:Set size of dbuf metadata cache to log2 fraction of arc size.
parm:           dbuf_mutex_cache_shift:Set size of dbuf cache mutex array as log2 shift.
parm:           zfs_btree_verify_intensity:Enable btree verification. Levels above 4 require ZFS be built with debugging
parm:           brt_zap_prefetch:Enable prefetching of BRT ZAP entries
parm:           brt_zap_default_bs:BRT ZAP leaf blockshift
parm:           brt_zap_default_ibs:BRT ZAP indirect blockshift
parm:           zfs_arc_min:Minimum ARC size in bytes
parm:           zfs_arc_max:Maximum ARC size in bytes
parm:           zfs_arc_meta_balance:Balance between metadata and data on ghost hits.
parm:           zfs_arc_grow_retry:Seconds before growing ARC size
parm:           zfs_arc_shrink_shift:log2(fraction of ARC to reclaim)
parm:           zfs_arc_pc_percent:Percent of pagecache to reclaim ARC to
parm:           zfs_arc_average_blocksize:Target average block size
parm:           zfs_compressed_arc_enabled:Disable compressed ARC buffers
parm:           zfs_arc_min_prefetch_ms:Min life of prefetch block in ms
parm:           zfs_arc_min_prescient_prefetch_ms:Min life of prescient prefetched block in ms
parm:           l2arc_write_max:Max write bytes per interval
parm:           l2arc_write_boost:Extra write bytes during device warmup
parm:           l2arc_headroom:Number of max device writes to precache
parm:           l2arc_headroom_boost:Compressed l2arc_headroom multiplier
parm:           l2arc_trim_ahead:TRIM ahead L2ARC write size multiplier
parm:           l2arc_feed_secs:Seconds between L2ARC writing
parm:           l2arc_feed_min_ms:Min feed interval in milliseconds
parm:           l2arc_noprefetch:Skip caching prefetched buffers
parm:           l2arc_feed_again:Turbo L2ARC warmup
parm:           l2arc_norw:No reads during writes
parm:           l2arc_meta_percent:Percent of ARC size allowed for L2ARC-only headers
parm:           l2arc_rebuild_enabled:Rebuild the L2ARC when importing a pool
parm:           l2arc_rebuild_blocks_min_l2size:Min size in bytes to write rebuild log blocks in L2ARC
parm:           l2arc_mfuonly:Cache only MFU data from ARC into L2ARC
parm:           l2arc_exclude_special:Exclude dbufs on special vdevs from being cached to L2ARC if set.
parm:           zfs_arc_lotsfree_percent:System free memory I/O throttle in bytes
parm:           zfs_arc_sys_free:System free memory target size in bytes
parm:           zfs_arc_dnode_limit:Minimum bytes of dnodes in ARC
parm:           zfs_arc_dnode_limit_percent:Percent of ARC meta buffers for dnodes
parm:           zfs_arc_dnode_reduce_percent:Percentage of excess dnodes to try to unpin
parm:           zfs_arc_eviction_pct:When full, ARC allocation waits for eviction of this % of alloc size
parm:           zfs_arc_evict_batch_limit:The number of headers to evict per sublist before moving to the next
parm:           zfs_arc_prune_task_threads:Number of arc_prune threads
parm:           zstd_earlyabort_pass:Enable early abort attempts when using zstd
parm:           zstd_abort_size:Minimal size of block to attempt early abort
parm:           zfs_max_dataset_nesting:Limit to the amount of nesting a path can have. Defaults to 50.
parm:           zfs_fletcher_4_impl:Select fletcher 4 implementation.
parm:           zfs_sha512_impl:Select SHA512 implementation.
parm:           zfs_sha256_impl:Select SHA256 implementation.
parm:           icp_gcm_impl:Select gcm implementation.
parm:           zfs_blake3_impl:Select BLAKE3 implementation.
parm:           icp_aes_impl:Select aes implementation.
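For what it's worth, every `parm:` line above corresponds to a file under `/sys/module/zfs/parameters/` once the zfs module actually loads, so you can read (and for writable tunables, set) them at runtime. A minimal sketch, guarded so it also behaves sensibly on a box where the module isn't loaded yet (like mine, where modprobe fails):

```shell
# Tunables from "modinfo zfs" live under /sys/module/zfs/parameters
# once the module is loaded; each parm: name is a file there.
if [ -d /sys/module/zfs/parameters ]; then
    # Read one tunable, e.g. the ARC size cap (0 usually means "auto")
    cat /sys/module/zfs/parameters/zfs_arc_max
    # Writable tunables can be changed at runtime as root, e.g.:
    #   echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
else
    echo "zfs module not loaded"
fi
```

Settings made this way don't survive a reboot; a persistent value would go in something like `/etc/modprobe.d/zfs.conf` (`options zfs zfs_arc_max=4294967296`).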