Posts posted by grek

  1. I installed Armbian_23.5.4_Helios64_bookworm_current_6.1.36 and the 6.6.8-edge-rockchip64 kernel.

     

    Generally it is OK. For the first time in a long while I do not need to limit my SATA links to 3 Gbit, and I am using the ondemand CPU governor with min 408000 and max 1200000, instead of performance pinned at 1416000.

    My configuration is a little specific, because I am using LUKS encryption under my ZFS mirror (2x 4 TB) + 1 cache SSD, plus 2 LUKS-encrypted EXT4 HDDs alongside the pool.
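For clarity, the layering looks roughly like this (a hypothetical sketch; the device and mapping names are illustrative, though they match the vdev names that show up in the logs later):

```shell
# Sketch: ZFS mirror on top of LUKS mappings, plus an SSD cache partition.
cryptsetup open /dev/sda sda-crypt
cryptsetup open /dev/sdb sdb-crypt
zpool create data mirror /dev/mapper/sda-crypt /dev/mapper/sdb-crypt
zpool add data cache /dev/sdc2
```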

     

    I tested NFS yesterday, and I successfully downloaded ~700 GB without any errors.

     

    The only thing not working as expected is that running sbc-bench.sh causes a kernel panic ( http://sprunge.us/3VLJNX )

     

    I will keep testing, but it is a good starting point.

     

     

    [ 3500.389590] Internal error: Oops: 0000000086000004 [#1] PREEMPT SMP
    [ 3500.390166] Modules linked in: governor_powersave governor_performance dm_crypt trusted asn1_encoder nfsv3 nfs fscache netfs rfkill lz4hc lz4 zram binfmt_misc zfs(PO) spl(O) snd_soc_hdmi_codec snd_soc_rockchip_i2s crct10dif_ce rockchip_vdec(C) hantro_vpu snd_soc_core rockchip_rga v4l2_vp9 leds_pwm snd_compress v4l2_h264 ac97_bus videobuf2_dma_contig v4l2_mem2mem snd_pcm_dmaengine videobuf2_dma_sg snd_pcm pwm_fan panfrost gpu_sched gpio_charger drm_shmem_helper videobuf2_memops videobuf2_v4l2 snd_timer rk_crypto videodev snd videobuf2_common nvmem_rockchip_efuse mc soundcore gpio_beeper ledtrig_netdev nfsd lm75 auth_rpcgss nfs_acl lockd grace sunrpc ip_tables x_tables autofs4 efivarfs raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear cdc_ncm cdc_ether usbnet r8152 rockchipdrm realtek dw_mipi_dsi dw_hdmi analogix_dp fusb302 drm_display_helper tcpm cec typec dwmac_rk stmmac_platform drm_dma_helper drm_kms_helper stmmac drm adc_keys pcs_xpcs
    [ 3500.398227] CPU: 4 PID: 0 Comm: swapper/4 Tainted: P         C O       6.6.8-edge-rockchip64 #1
    [ 3500.399007] Hardware name: Helios64 (DT)
    [ 3500.399362] pstate: 000000c5 (nzcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
    [ 3500.399988] pc : psi_group_change+0xc8/0x358
    [ 3500.400388] lr : psi_task_change+0x8c/0xbc
    [ 3500.400767] sp : ffff80008203bd60
    [ 3500.401069] x29: ffff80008203bd70 x28: ffff800075e65000 x27: ffff800081cac8e8
    [ 3500.401720] x26: ffff800081909d40 x25: 0000000000000000 x24: ffff800081907008
    [ 3500.402368] x23: 0000000000000001 x22: 0000000000000000 x21: 0000000000000000
    [ 3500.403015] x20: ffff0000f776ed40 x19: 0000000000000004 x18: 0000000000000000
    [ 3500.403662] x17: ffff800075e65000 x16: ffff800082038000 x15: 0000000000000001
    [ 3500.404309] x14: 0000000000000000 x13: ffff800081581d50 x12: ffff800081c89b68
    [ 3500.404957] x11: 0000000000000040 x10: 0000000000000004 x9 : 0000032eff793e13
    [ 3500.405605] x8 : ffff800081c8dd28 x7 : 0000000000000000 x6 : 000000001c036006
    [ 3500.406252] x5 : 0000000000000001 x4 : 0000000000000001 x3 : 0000000000000001
    [ 3500.406898] x2 : 0000000000000001 x1 : 0000000000000004 x0 : ffff0000f776ed40
    [ 3500.407545] Call trace:
    [ 3500.407771]  psi_group_change+0xc8/0x358
    [ 3500.408131]  psi_task_change+0x8c/0xbc
    [ 3500.408479]  activate_task+0xcc/0x128
    [ 3500.408818]  ttwu_do_activate+0x58/0x274
    [ 3500.409180]  sched_ttwu_pending+0xe4/0x1c0
    [ 3500.409557]  __flush_smp_call_function_queue+0x14c/0x458
    [ 3500.410042]  generic_smp_call_function_single_interrupt+0x14/0x20
    [ 3500.410594]  ipi_handler+0x84/0x250
    [ 3500.410920]  handle_percpu_devid_irq+0xa4/0x230
    [ 3500.411337]  generic_handle_domain_irq+0x2c/0x44
    [ 3500.411756]  gic_handle_irq+0x50/0x128
    [ 3500.412097]  call_on_irq_stack+0x24/0x4c
    [ 3500.412458]  do_interrupt_handler+0xd4/0xd8
    [ 3500.412842]  el1_interrupt+0x34/0x68
    [ 3500.413173]  el1h_64_irq_handler+0x18/0x24
    [ 3500.413548]  el1h_64_irq+0x64/0x68
    [ 3500.413860]  cpuidle_enter_state+0xc0/0x4bc
    [ 3500.414246]  cpuidle_enter+0x38/0x50
    [ 3500.414577]  do_idle+0x1fc/0x270
    [ 3500.414878]  cpu_startup_entry+0x38/0x3c
    [ 3500.415238]  secondary_start_kernel+0x128/0x148
    [ 3500.415653]  __secondary_switched+0xb8/0xbc
    [ 3500.416043] Code: 54fffec1 52800002 52800024 34000173 (8b224a80) 
    [ 3500.416592] ---[ end trace 0000000000000000 ]---
    [ 3500.417009] Kernel panic - not syncing: Oops: Fatal exception in interrupt
    [ 3500.417621] SMP: stopping secondary CPUs
    [ 3500.418110] Kernel Offset: disabled
    [ 3500.418425] CPU features: 0x1,00000208,3c020000,1000421b
    [ 3500.418903] Memory Limit: none
    [ 3500.419192] ---[ end Kernel panic - not syncing: Oops: Fatal exception in interrupt ]---

     

  2. @ebin-dev unfortunately I can't  :(

    I ran into another problem with OMV6 and LUKS and ZFS 😕 . Everything was good until the reboot. After the reboot, I don't know why, but the shared directories had a missing device (probably the UUID was changed...).

    I tried to set up cryptsetup at boot, to unlock my disks and then import the ZFS pool, but the passphrase prompt never appeared during the boot process (even though I recompiled the kernel with the options suggested there).

    Finally I am using a workaround: keyfiles in /boot/. But even then, I sometimes hit the problem that only one disk was unlocked (so my ZFS pool was incomplete 😕 ). I added ExecStartPre=/usr/bin/sleep 30 to the [Service] section of /lib/systemd/system/zfs-import-cache.service, and it looks like it is working now (tried 3 times). We will see.
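Rather than editing the unit under /lib directly, the same sleep workaround can be expressed as a systemd drop-in, which survives package upgrades. A minimal sketch (written to a local directory for illustration; on the real box the directory would be /etc/systemd/system/zfs-import-cache.service.d, followed by systemctl daemon-reload):

```shell
# Sketch of the sleep-30 workaround as a systemd drop-in. Assumption: the
# 30 s delay gives all LUKS mappings time to appear before the pool import.
DEST="zfs-import-cache.service.d"
mkdir -p "$DEST"
cat > "$DEST/wait-for-luks.conf" <<'EOF'
[Service]
ExecStartPre=/usr/bin/sleep 30
EOF
cat "$DEST/wait-for-luks.conf"
```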

     

    The other bad thing is that I tried to go back to my old configuration on eMMC, but OMV5 also does not see the devices in the shared directories. It only works after removing all of them and setting them up from scratch.

     

    I don't know if there has been any progress with native ZFS encryption performance on ARM, but maybe I will be forced to set it up instead of LUKS and ZFS.
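If I do switch, native ZFS encryption would look roughly like this (a sketch only; the dataset name is illustrative, and performance on this ARM board is exactly the open question):

```shell
# Hypothetical sketch: native ZFS encryption instead of ZFS-on-LUKS.
# The pool sits on raw disks; encryption is per-dataset.
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase data/secure
# After reboot, the key must be loaded before the dataset can be mounted:
zfs load-key data/secure && zfs mount data/secure
```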

  3. @usefulnoise yesterday I ran helios64 with Armbian 22.08.0 Bullseye (Debian) and kernel 5.15.59-rockchip64 

    I built the image from the Armbian sources. I tried to include ZFS in the image itself, but I gave up.

    Finally I successfully installed zfs-dkms from the backports repos:

     

    apt -t bullseye-backports install zfs-dkms zfsutils-linux
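For reference, the -t bullseye-backports install only works if the backports repo is already in the APT sources. A sketch of adding it (writing to a local file for illustration; on the real system the path would be /etc/apt/sources.list.d/bullseye-backports.list, followed by apt update):

```shell
# Sketch: enable bullseye-backports so `apt -t bullseye-backports install`
# can resolve zfs-dkms from backports.
LIST="bullseye-backports.list"
printf 'deb http://deb.debian.org/debian bullseye-backports main contrib\n' > "$LIST"
cat "$LIST"
```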

     

    The modules were built without any errors.

    I also successfully installed OMV6 with the ZFS plugin and LUKS encryption.

     

    root@helios64:~# zfs --version
    zfs-2.1.5-1~bpo11+1
    zfs-kmod-2.1.5-1~bpo11+1
    root@helios64:~# uname -a
    Linux helios64 5.15.59-rockchip64 #trunk SMP PREEMPT Wed Aug 10 11:29:36 UTC 2022 aarch64 GNU/Linux
    root@helios64:~#


    Right now I have 16 hours of uptime without a reboot.

    I left the system at defaults for these tests.

     

    root@helios64:~# cat /boot/armbianEnv.txt
    verbosity=1
    bootlogo=false
    overlay_prefix=rockchip
    rootdev=UUID=04ed9a40-6590-4491-8d58-0d647b8cc9db
    rootfstype=ext4
    usbstoragequirks=0x2537:0x1066:u,0x2537:0x1068:u
    root@helios64:~#

     

    root@helios64:~# cat /etc/default/cpufrequtils
    ENABLE="true"
    GOVERNOR="ondemand"
    MAX_SPEED="0"
    MIN_SPEED="0"
    root@helios64:~#

     

    I'm in the middle of a ZFS scrub. We will see if it completes without errors.

    So far I have seen one problem. Yesterday I ran screen while copying some data over NFS, and it failed with a kernel error.

    Probably I will need to go back to my old cpufreq settings.

     

    GOVERNOR=performance
    MAX_SPEED=1416000
    MIN_SPEED=1416000
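A quick way to confirm after reboot that the governor and limits actually took effect (standard cpufreq sysfs paths; the policy number for the big cluster may differ on the Helios64):

```shell
# Sketch: read back the active governor and frequency limits from sysfs.
cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
cat /sys/devices/system/cpu/cpufreq/policy0/scaling_min_freq
cat /sys/devices/system/cpu/cpufreq/policy0/scaling_max_freq
```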
     

     

     

    Aug 10 17:48:21 helios64 kernel: [ 3612.621208] FS-Cache: Loaded
    Aug 10 17:48:22 helios64 kernel: [ 3612.787472] FS-Cache: Netfs 'nfs' registered for caching
    Aug 10 17:59:51 helios64 kernel: [ 4301.850819] Modules linked in: nfsv3 nfs fscache netfs dm_crypt dm_mod r8152 snd_soc_hdmi_codec leds_pwm gpio_charger pwm_fan panfrost snd_soc_rockchip_i2s gpu_sched snd_soc_rockchip_pcm sg snd_soc_core snd_pcm_dmaengine snd_pcm rockchip_vdec(C) hantro_vpu(C) snd_timer rockchip_iep v4l2_h264 snd rockchip_rga videobuf2_dma_contig videobuf2_vmalloc v4l2_mem2mem videobuf2_dma_sg videobuf2_memops fusb302 soundcore videobuf2_v4l2 tcpm videobuf2_common typec videodev mc gpio_beeper cpufreq_dt zfs(POE) zunicode(POE) zzstd(OE) nfsd zlua(OE) zcommon(POE) znvpair(POE) zavl(POE) auth_rpcgss icp(POE) nfs_acl lockd spl(OE) grace softdog ledtrig_netdev lm75 sunrpc ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear md_mod realtek dwmac_rk stmmac_platform stmmac pcs_xpcs adc_keys
    Aug 10 17:59:51 helios64 kernel: [ 4301.857423] CPU: 5 PID: 79841 Comm: mc Tainted: P         C OE     5.15.59-rockchip64 #trunk
    Aug 10 17:59:51 helios64 kernel: [ 4301.858163] Hardware name: Helios64 (DT)
    Aug 10 17:59:51 helios64 kernel: [ 4301.858509] pstate: 40000005 (nZcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
    Aug 10 17:59:51 helios64 kernel: [ 4301.859119] pc : nfs_generic_pg_pgios+0x60/0xc0 [nfs]
    Aug 10 17:59:51 helios64 kernel: [ 4301.859619] lr : nfs_generic_pg_pgios+0x5c/0xc0 [nfs]
    Aug 10 17:59:51 helios64 kernel: [ 4301.860096] sp : ffff80000e2b35c0
    Aug 10 17:59:51 helios64 kernel: [ 4301.860388] x29: ffff80000e2b35c0 x28: ffff00006c52d700 x27: ffff00006c52d700
    Aug 10 17:59:51 helios64 kernel: [ 4301.861018] x26: ffff80000e2b3880 x25: ffff80000954be68 x24: 0000000000000400
    Aug 10 17:59:51 helios64 kernel: [ 4301.861646] x23: 0000000000000000 x22: ffff00005c2f8780 x21: ffff8000019fd370
    Aug 10 17:59:51 helios64 kernel: [ 4301.862273] x20: ffff80000e2b3828 x19: ffff00009ff3a580 x18: 0000000000000001
    Aug 10 17:59:51 helios64 kernel: [ 4301.862901] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000002
    Aug 10 17:59:51 helios64 kernel: [ 4301.863529] x14: 0000000000000080 x13: 000000000001b896 x12: ffff0000304cf9b8
    Aug 10 17:59:51 helios64 kernel: [ 4301.864157] x11: ffff00009ff3a418 x10: ffff00009ff3a488 x9 : 0000000000000000
    Aug 10 17:59:51 helios64 kernel: [ 4301.864784] x8 : ffff00009ff3a908 x7 : 0000000000000000 x6 : ffff00009ff3a590
    Aug 10 17:59:51 helios64 kernel: [ 4301.865411] x5 : 0000000000000096 x4 : ffff00009ff3a6f8 x3 : 0000000000000800
    Aug 10 17:59:51 helios64 kernel: [ 4301.866039] x2 : 50c2f8bdeb7ff200 x1 : 0000000000000000 x0 : 0000000000000000
    Aug 10 17:59:51 helios64 kernel: [ 4301.866667] Call trace:
    Aug 10 17:59:51 helios64 kernel: [ 4301.866883]  nfs_generic_pg_pgios+0x60/0xc0 [nfs]
    Aug 10 17:59:51 helios64 kernel: [ 4301.867331]  nfs_pageio_doio+0x48/0xa0 [nfs]
    Aug 10 17:59:51 helios64 kernel: [ 4301.867739]  __nfs_pageio_add_request+0x148/0x420 [nfs]
    Aug 10 17:59:51 helios64 kernel: [ 4301.868230]  nfs_pageio_add_request_mirror+0x40/0x60 [nfs]
    Aug 10 17:59:51 helios64 kernel: [ 4301.868743]  nfs_pageio_add_request+0x208/0x290 [nfs]
    Aug 10 17:59:51 helios64 kernel: [ 4301.869220]  readpage_async_filler+0x20c/0x400 [nfs]
    Aug 10 17:59:51 helios64 kernel: [ 4301.869688]  read_cache_pages+0xcc/0x198
    Aug 10 17:59:51 helios64 kernel: [ 4301.870039]  nfs_readpages+0x138/0x1b8 [nfs]
    Aug 10 17:59:51 helios64 kernel: [ 4301.870447]  read_pages+0x188/0x288
    Aug 10 17:59:51 helios64 kernel: [ 4301.870757]  page_cache_ra_unbounded+0x148/0x248
    Aug 10 17:59:51 helios64 kernel: [ 4301.871163]  do_page_cache_ra+0x40/0x50
    Aug 10 17:59:51 helios64 kernel: [ 4301.871501]  ondemand_readahead+0x128/0x2c8
    Aug 10 17:59:51 helios64 kernel: [ 4301.871870]  page_cache_async_ra+0xd4/0xd8
    Aug 10 17:59:51 helios64 kernel: [ 4301.872232]  filemap_get_pages+0x1fc/0x568
    Aug 10 17:59:51 helios64 kernel: [ 4301.872595]  filemap_read+0xac/0x340
    Aug 10 17:59:51 helios64 kernel: [ 4301.872911]  generic_file_read_iter+0xf4/0x188
    Aug 10 17:59:51 helios64 kernel: [ 4301.873304]  nfs_file_read+0x9c/0x128 [nfs]
    Aug 10 17:59:51 helios64 kernel: [ 4301.873706]  new_sync_read+0x104/0x180
    Aug 10 17:59:51 helios64 kernel: [ 4301.874040]  vfs_read+0x148/0x1e0
    Aug 10 17:59:51 helios64 kernel: [ 4301.874334]  ksys_read+0x68/0xf0
    Aug 10 17:59:51 helios64 kernel: [ 4301.874622]  __arm64_sys_read+0x1c/0x28
    Aug 10 17:59:51 helios64 kernel: [ 4301.874960]  invoke_syscall+0x44/0x108
    Aug 10 17:59:51 helios64 kernel: [ 4301.875293]  el0_svc_common.constprop.3+0x94/0xf8
    Aug 10 17:59:51 helios64 kernel: [ 4301.875709]  do_el0_svc+0x24/0x88
    Aug 10 17:59:51 helios64 kernel: [ 4301.876003]  el0_svc+0x20/0x50
    Aug 10 17:59:51 helios64 kernel: [ 4301.876275]  el0t_64_sync_handler+0x90/0xb8
    Aug 10 17:59:51 helios64 kernel: [ 4301.876645]  el0t_64_sync+0x180/0x184
    Aug 10 17:59:51 helios64 kernel: [ 4301.877507] ---[ end trace 970b42649c1e42e3 ]---

     

    Edit.

     

    After a few hours, I saw errors regarding my HDDs.

    Aug 11 11:32:32 helios64 kernel: [67461.218137] ata1: hard resetting link
    Aug 11 11:32:32 helios64 kernel: [67461.694283] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    Aug 11 11:32:32 helios64 kernel: [67461.696935] ata1.00: configured for UDMA/133
    Aug 11 11:32:32 helios64 kernel: [67461.697672] sd 0:0:0:0: [sda] tag#19 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
    Aug 11 11:32:32 helios64 kernel: [67461.698535] sd 0:0:0:0: [sda] tag#19 Sense Key : 0x5 [current]
    Aug 11 11:32:32 helios64 kernel: [67461.699057] sd 0:0:0:0: [sda] tag#19 ASC=0x21 ASCQ=0x4
    Aug 11 11:32:32 helios64 kernel: [67461.699518] sd 0:0:0:0: [sda] tag#19 CDB: opcode=0x88 88 00 00 00 00 00 5c bd e5 50 00 00 08 00 00 00
    Aug 11 11:32:32 helios64 kernel: [67461.701293] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796629049344 size=1048576 flags=40080cb0
    Aug 11 11:32:32 helios64 kernel: [67461.702300] sd 0:0:0:0: [sda] tag#20 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
    Aug 11 11:32:32 helios64 kernel: [67461.703147] sd 0:0:0:0: [sda] tag#20 Sense Key : 0x5 [current]
    Aug 11 11:32:32 helios64 kernel: [67461.703668] sd 0:0:0:0: [sda] tag#20 ASC=0x21 ASCQ=0x4
    Aug 11 11:32:32 helios64 kernel: [67461.704127] sd 0:0:0:0: [sda] tag#20 CDB: opcode=0x88 88 00 00 00 00 00 5c bd ed 50 00 00 08 00 00 00
    Aug 11 11:32:32 helios64 kernel: [67461.705890] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796630097920 size=1048576 flags=40080cb0
    Aug 11 11:32:32 helios64 kernel: [67461.706862] sd 0:0:0:0: [sda] tag#28 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
    Aug 11 11:32:32 helios64 kernel: [67461.707709] sd 0:0:0:0: [sda] tag#28 Sense Key : 0x5 [current]
    Aug 11 11:32:32 helios64 kernel: [67461.708228] sd 0:0:0:0: [sda] tag#28 ASC=0x21 ASCQ=0x4
    Aug 11 11:32:32 helios64 kernel: [67461.708688] sd 0:0:0:0: [sda] tag#28 CDB: opcode=0x88 88 00 00 00 00 00 5c bd f5 50 00 00 08 00 00 00
    Aug 11 11:32:32 helios64 kernel: [67461.710466] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796631146496 size=1048576 flags=40080cb0
    Aug 11 11:32:32 helios64 kernel: [67461.711410] ata1: EH complete

     

     

    So I changed all my settings back to the old ones (from before the update):

    GOVERNOR=performance
    MAX_SPEED=1416000
    MIN_SPEED=1416000

     

    and 

     

    root@helios64:~# cat /boot/armbianEnv.txt
    verbosity=7
    bootlogo=false
    console=serial
    extraargs=earlyprintk ignore_loglevel libata.force=3.0

     

    We will see what happens next.

    I rebooted the system with those settings.

    The ZFS scrub resumed.
     

     

     

  4. Hey guys.

    I have a Helios64 with 5x WD 14 TB Red Pro and ZFS.

    But currently I'm out of free space :(

    I want to extend my storage pool somehow.

    I wonder if it's possible to buy an additional storage enclosure like the QNAP TL-D800C (JBOD, 1x USB 3.2 Gen 2 Type-C) and add 8 more HDDs.

    I wanted to create an additional raidz1 ZFS pool on it.

    Do you think it will work?
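If the enclosure presents the eight disks individually (true JBOD, not a hardware RAID volume), creating the second pool would look like this sketch (the by-id paths are placeholders; by-id naming is safer than /dev/sdX for USB-attached disks, which can reorder across reboots):

```shell
# Hypothetical sketch: a second raidz1 pool on the enclosure disks.
zpool create data2 raidz1 \
  /dev/disk/by-id/usb-disk1 /dev/disk/by-id/usb-disk2 \
  /dev/disk/by-id/usb-disk3 /dev/disk/by-id/usb-disk4 \
  /dev/disk/by-id/usb-disk5 /dev/disk/by-id/usb-disk6 \
  /dev/disk/by-id/usb-disk7 /dev/disk/by-id/usb-disk8
```

Whether a single USB 3.2 link stays reliable under scrub load is the real question.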

     

  5. @0utc45t for me, forcing SATA to 3.0 Gbps helped:

     

     

     

    Quote

    root@helios64:~# cat /boot/armbianEnv.txt 
    verbosity=7
    console=serial
    extraargs=earlyprintk ignore_loglevel libata.force=3.0
     

     

    Quote

    root@helios64:~# zpool status
      pool: data
     state: ONLINE
      scan: scrub repaired 0B in 06:10:03 with 0 errors on Sun May  9 06:34:04 2021
    config:

        NAME           STATE     READ WRITE CKSUM
        data           ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            sda-crypt  ONLINE       0     0     0
            sdb-crypt  ONLINE       0     0     0
        cache
          sdc2         ONLINE       0     0     0

    errors: No known data errors
    root@helios64:~# uptime
     20:06:09 up 11 days,  5:26,  1 user,  load average: 0.09, 0.12, 0.14
    root@helios64:~# uname -a
    Linux helios64 5.10.35-rockchip64 #21.05.1 SMP PREEMPT Fri May 7 13:53:11 UTC 2021 aarch64 GNU/Linux
    root@helios64:~# 
     

     

  6. Thanks @ShadowDance for the tip, my scrub completed without errors.

     

    Quote

    root@helios64:~# zpool status -v data
      pool: data
     state: ONLINE
      scan: scrub repaired 0B in 07:47:07 with 0 errors on Thu Mar 25 03:55:37 2021
    config:

        NAME           STATE     READ WRITE CKSUM
        data           ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            sda-crypt  ONLINE       0     0     0
            sdb-crypt  ONLINE       0     0     0
        cache
          sdc2         ONLINE       0     0     0
     

     

    Quote

    root@helios64:~# cat /boot/armbianEnv.txt 
    verbosity=7
    console=serial
    extraargs=earlyprintk ignore_loglevel libata.force=noncq
     

     

    Quote

    root@helios64:~# cat /etc/default/cpufrequtils 
    ENABLE="true"
    GOVERNOR=performance
    MAX_SPEED=1416000
    MIN_SPEED=1416000
     

     

     

    Quote

    root@helios64:~# cat /sys/block/sda/queue/scheduler 
    mq-deadline kyber [bfq] none
    root@helios64:~# cat /sys/block/sdb/queue/scheduler 
    mq-deadline kyber [bfq] none
    root@helios64:~# 
     

     

     

    Quote

    root@helios64:~# hdparm -I /dev/sda

    /dev/sda:

    ATA device, with non-removable media
        Model Number:       WDC WD40EFRX-68N32N0                    
        Firmware Revision:  82.00A82
     

    root@helios64:~# hdparm -I /dev/sdb

    /dev/sdb:

    ATA device, with non-removable media
        Model Number:       WDC WD40EFRX-68N32N0                    
        Firmware Revision:  82.00A82
     

     

  7. Today I manually ran a zfs scrub on my ZFS mirror pool,

    and I got similar errors.

    The funny thing is that normally the scrub is initiated by cron without any issue.

     

    Any suggestions on what I should do?

    I ran zpool clear data sda-crypt and zpool clear data sdb-crypt on my pool, and now it reports ONLINE state, but I'm a little afraid about my data...

     

     

    Output before today's manual zfs scrub (scrub without errors on Sun Mar 14):

     

    root@helios64:/# zpool status -v data
      pool: data
     state: ONLINE
      scan: scrub repaired 0B in 06:17:38 with 0 errors on Sun Mar 14 06:41:40 2021
    config:
    
            NAME           STATE     READ WRITE CKSUM
            data           ONLINE       0     0     0
              mirror-0     ONLINE       0     0     0
                sda-crypt  ONLINE       0     0     0
                sdb-crypt  ONLINE       0     0     0
            cache
              sdc2         ONLINE       0     0     0

     

     

     

    root@helios64:/var/log# cat /etc/cron.d/zfsutils-linux
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    
    # TRIM the first Sunday of every month.
    24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi
    
    # Scrub the second Sunday of every month.
    24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi
    root@helios64:/var/log#
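Those cron guards work because the day-of-month range plus the weekday test pin one specific day: the scrub entry fires only when a day in the 8-14 window is a Sunday (date +%w prints 0), i.e. on the second Sunday of the month. A quick check for March 2021, the month of the scrub shown above (GNU date assumed):

```shell
# Sketch: find which of 2021-03-08..14 is a Sunday, i.e. the day the
# scrub cron entry would have fired that month.
for day in 08 09 10 11 12 13 14; do
  if [ "$(date -d "2021-03-$day" +%w)" -eq 0 ]; then
    echo "second Sunday: 2021-03-$day"
  fi
done
# prints: second Sunday: 2021-03-14
```

which matches the "Sun Mar 14" timestamp in the zpool status output above.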

     

     

     

     

     

    Quote

    root@helios64:/var/log# zpool status -v data
      pool: data
     state: DEGRADED
    status: One or more devices has experienced an unrecoverable error.  An
            attempt was made to correct the error.  Applications are unaffected.
    action: Determine if the device needs to be replaced, and clear the errors
            using 'zpool clear' or replace the device with 'zpool replace'.
       see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
      scan: scrub repaired 4.87M in 06:32:31 with 0 errors on Tue Mar 23 15:21:50 2021
    config:

            NAME           STATE     READ WRITE CKSUM
            data           DEGRADED     0     0     0
              mirror-0     DEGRADED     0     0     0
                sda-crypt  DEGRADED     6     0    32  too many errors
                sdb-crypt  DEGRADED     9     0    29  too many errors
            cache
              sdc2         ONLINE       0     0     0
     

     

    Quote

     

    [ 8255.882611] ata2.00: irq_stat 0x08000000
    [ 8255.882960] ata2: SError: { Proto }
    [ 8255.883273] ata2.00: failed command: READ FPDMA QUEUED
    [ 8255.883732] ata2.00: cmd 60/00:10:18:4d:9d/08:00:15:00:00/40 tag 2 ncq dma 1048576 in
    [ 8255.883732]          res 40/00:10:18:4d:9d/00:00:15:00:00/40 Emask 0x2 (HSM violation)
    [ 8255.885101] ata2.00: status: { DRDY }
    [ 8255.885426] ata2.00: failed command: READ FPDMA QUEUED
    [ 8255.885885] ata2.00: cmd 60/00:18:90:5b:9d/08:00:15:00:00/40 tag 3 ncq dma 1048576 in
    [ 8255.885885]          res 40/00:10:18:4d:9d/00:00:15:00:00/40 Emask 0x2 (HSM violation)
    [ 8255.887294] ata2.00: status: { DRDY }
    [ 8255.887623] ata2.00: failed command: READ FPDMA QUEUED
    [ 8255.888083] ata2.00: cmd 60/10:98:18:55:9d/06:00:15:00:00/40 tag 19 ncq dma 794624 in
    [ 8255.888083]          res 40/00:10:18:4d:9d/00:00:15:00:00/40 Emask 0x2 (HSM violation)
    [ 8255.889450] ata2.00: status: { DRDY }
    [ 8255.889783] ata2: hard resetting link
    [ 8256.365961] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    [ 8256.367771] ata2.00: configured for UDMA/133
    [ 8256.368540] sd 1:0:0:0: [sdb] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [ 8256.369343] sd 1:0:0:0: [sdb] tag#2 Sense Key : 0x5 [current]
    [ 8256.369858] sd 1:0:0:0: [sdb] tag#2 ASC=0x21 ASCQ=0x4
    [ 8256.370354] sd 1:0:0:0: [sdb] tag#2 CDB: opcode=0x88 88 00 00 00 00 00 15 9d 4d 18 00 00 08 00 00 00
    [ 8256.371158] blk_update_request: I/O error, dev sdb, sector 362630424 op 0x0:(READ) flags 0x700 phys_seg 16 prio class 0
    [ 8256.372125] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=185649999872 size=1048576 flags=40080cb0
    [ 8256.373117] sd 1:0:0:0: [sdb] tag#3 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [ 8256.373964] sd 1:0:0:0: [sdb] tag#3 Sense Key : 0x5 [current]
    [ 8256.374480] sd 1:0:0:0: [sdb] tag#3 ASC=0x21 ASCQ=0x4
    [ 8256.374940] sd 1:0:0:0: [sdb] tag#3 CDB: opcode=0x88 88 00 00 00 00 00 15 9d 5b 90 00 00 08 00 00 00
    [ 8256.375742] blk_update_request: I/O error, dev sdb, sector 362634128 op 0x0:(READ) flags 0x700 phys_seg 16 prio class 0
    [ 8256.376704] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=185651896320 size=1048576 flags=40080cb0
    [ 8256.377724] sd 1:0:0:0: [sdb] tag#19 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [ 8256.378568] sd 1:0:0:0: [sdb] tag#19 Sense Key : 0x5 [current]
    [ 8256.379096] sd 1:0:0:0: [sdb] tag#19 ASC=0x21 ASCQ=0x4
    [ 8256.379561] sd 1:0:0:0: [sdb] tag#19 CDB: opcode=0x88 88 00 00 00 00 00 15 9d 55 18 00 00 06 10 00 00
    [ 8256.380371] blk_update_request: I/O error, dev sdb, sector 362632472 op 0x0:(READ) flags 0x700 phys_seg 13 prio class 0
    [ 8256.381336] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=185651048448 size=794624 flags=40080cb0
    [ 8256.382389] ata2: EH complete
    [15116.869369] ata2.00: exception Emask 0x2 SAct 0x30000200 SErr 0x400 action 0x6
    [15116.870017] ata2.00: irq_stat 0x08000000
    [15116.870366] ata2: SError: { Proto }
    [15116.870680] ata2.00: failed command: READ FPDMA QUEUED
    [15116.871140] ata2.00: cmd 60/00:48:98:41:1b/07:00:55:00:00/40 tag 9 ncq dma 917504 in
    [15116.871140]          res 40/00:e0:98:40:1b/00:00:55:00:00/40 Emask 0x2 (HSM violation)
    [15116.872501] ata2.00: status: { DRDY }
    [15116.872826] ata2.00: failed command: READ FPDMA QUEUED
    [15116.873284] ata2.00: cmd 60/00:e0:98:40:1b/01:00:55:00:00/40 tag 28 ncq dma 131072 in
    [15116.873284]          res 40/00:e0:98:40:1b/00:00:55:00:00/40 Emask 0x2 (HSM violation)
    [15116.874696] ata2.00: status: { DRDY }
    [15116.875023] ata2.00: failed command: READ FPDMA QUEUED
    [15116.875482] ata2.00: cmd 60/00:e8:98:48:1b/01:00:55:00:00/40 tag 29 ncq dma 131072 in
    [15116.875482]          res 40/00:e0:98:40:1b/00:00:55:00:00/40 Emask 0x2 (HSM violation)
    [15116.876850] ata2.00: status: { DRDY }
    [15116.877182] ata2: hard resetting link
    [15117.353348] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    [15117.355108] ata2.00: configured for UDMA/133
    [15117.355678] sd 1:0:0:0: [sdb] tag#9 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [15117.356481] sd 1:0:0:0: [sdb] tag#9 Sense Key : 0x5 [current]
    [15117.356996] sd 1:0:0:0: [sdb] tag#9 ASC=0x21 ASCQ=0x4
    [15117.357492] sd 1:0:0:0: [sdb] tag#9 CDB: opcode=0x88 88 00 00 00 00 00 55 1b 41 98 00 00 07 00 00 00
    [15117.358296] blk_update_request: I/O error, dev sdb, sector 1427849624 op 0x0:(READ) flags 0x700 phys_seg 18 prio class 0
    [15117.359273] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=731042230272 size=917504 flags=40080cb0
    [15117.360297] sd 1:0:0:0: [sdb] tag#28 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [15117.361152] sd 1:0:0:0: [sdb] tag#28 Sense Key : 0x5 [current]
    [15117.361716] sd 1:0:0:0: [sdb] tag#28 ASC=0x21 ASCQ=0x4
    [15117.362181] sd 1:0:0:0: [sdb] tag#28 CDB: opcode=0x88 88 00 00 00 00 00 55 1b 40 98 00 00 01 00 00 00
    [15117.362990] blk_update_request: I/O error, dev sdb, sector 1427849368 op 0x0:(READ) flags 0x700 phys_seg 2 prio class 0
    [15117.363944] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=731042099200 size=131072 flags=1808b0
    [15117.364883] sd 1:0:0:0: [sdb] tag#29 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [15117.365719] sd 1:0:0:0: [sdb] tag#29 Sense Key : 0x5 [current]
    [15117.366243] sd 1:0:0:0: [sdb] tag#29 ASC=0x21 ASCQ=0x4
    [15117.366706] sd 1:0:0:0: [sdb] tag#29 CDB: opcode=0x88 88 00 00 00 00 00 55 1b 48 98 00 00 01 00 00 00
    [15117.367514] blk_update_request: I/O error, dev sdb, sector 1427851416 op 0x0:(READ) flags 0x700 phys_seg 2 prio class 0
    [15117.368466] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=731043147776 size=131072 flags=1808b0
    [15117.369450] ata2: EH complete
    [18959.250478] FS-Cache: Loaded
    [18959.292215] FS-Cache: Netfs 'nfs' registered for caching
    [18959.613399] NFS: Registering the id_resolver key type
    [18959.613885] Key type id_resolver registered
    [18959.614253] Key type id_legacy registered
    [19150.007895] ata1.00: exception Emask 0x2 SAct 0xe0000 SErr 0x400 action 0x6
    [19150.008525] ata1.00: irq_stat 0x08000000
    [19150.008874] ata1: SError: { Proto }
    [19150.009188] ata1.00: failed command: READ FPDMA QUEUED
    [19150.009648] ata1.00: cmd 60/78:88:30:40:05/00:00:78:00:00/40 tag 17 ncq dma 61440 in
    [19150.009648]          res 40/00:90:60:41:05/00:00:78:00:00/40 Emask 0x2 (HSM violation)
    [19150.011011] ata1.00: status: { DRDY }
    [19150.011341] ata1.00: failed command: READ FPDMA QUEUED
    [19150.011901] ata1.00: cmd 60/98:90:60:41:05/00:00:78:00:00/40 tag 18 ncq dma 77824 in
    [19150.011901]          res 40/00:90:60:41:05/00:00:78:00:00/40 Emask 0x2 (HSM violation)
    [19150.013357] ata1.00: status: { DRDY }
    [19150.013689] ata1.00: failed command: READ FPDMA QUEUED
    [19150.014148] ata1.00: cmd 60/60:98:88:42:05/00:00:78:00:00/40 tag 19 ncq dma 49152 in
    [19150.014148]          res 40/00:90:60:41:05/00:00:78:00:00/40 Emask 0x2 (HSM violation)
    [19150.015507] ata1.00: status: { DRDY }
    [19150.015892] ata1: hard resetting link
    [19150.491836] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    [19150.493606] ata1.00: configured for UDMA/133
    [19150.494071] sd 0:0:0:0: [sda] tag#17 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [19150.494880] sd 0:0:0:0: [sda] tag#17 Sense Key : 0x5 [current]
    [19150.495401] sd 0:0:0:0: [sda] tag#17 ASC=0x21 ASCQ=0x4
    [19150.495888] sd 0:0:0:0: [sda] tag#17 CDB: opcode=0x88 88 00 00 00 00 00 78 05 40 30 00 00 00 78 00 00
    [19150.496698] blk_update_request: I/O error, dev sda, sector 2013610032 op 0x0:(READ) flags 0x700 phys_seg 14 prio class 0
    [19150.497673] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=1030951559168 size=61440 flags=40080cb0
    [19150.498675] sd 0:0:0:0: [sda] tag#18 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [19150.499485] sd 0:0:0:0: [sda] tag#18 Sense Key : 0x5 [current]
    [19150.500044] sd 0:0:0:0: [sda] tag#18 ASC=0x21 ASCQ=0x4
    [19150.500511] sd 0:0:0:0: [sda] tag#18 CDB: opcode=0x88 88 00 00 00 00 00 78 05 41 60 00 00 00 98 00 00
    [19150.501321] blk_update_request: I/O error, dev sda, sector 2013610336 op 0x0:(READ) flags 0x700 phys_seg 17 prio class 0
    [19150.502290] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=1030951714816 size=77824 flags=40080cb0
    [19150.503304] sd 0:0:0:0: [sda] tag#19 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [19150.504141] sd 0:0:0:0: [sda] tag#19 Sense Key : 0x5 [current]
    [19150.504664] sd 0:0:0:0: [sda] tag#19 ASC=0x21 ASCQ=0x4
    [19150.505128] sd 0:0:0:0: [sda] tag#19 CDB: opcode=0x88 88 00 00 00 00 00 78 05 42 88 00 00 00 60 00 00
    [19150.505936] blk_update_request: I/O error, dev sda, sector 2013610632 op 0x0:(READ) flags 0x700 phys_seg 11 prio class 0
    [19150.506896] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=1030951866368 size=49152 flags=40080cb0
    [19150.507875] ata1: EH complete
    [21308.943111] ata1.00: exception Emask 0x2 SAct 0x60010 SErr 0x400 action 0x6
    [21308.943736] ata1.00: irq_stat 0x08000000
    [21308.944086] ata1: SError: { Proto }
    [21308.944399] ata1.00: failed command: READ FPDMA QUEUED
    [21308.944859] ata1.00: cmd 60/00:20:70:11:6d/01:00:21:00:00/40 tag 4 ncq dma 131072 in
    [21308.944859]          res 40/00:88:70:0e:6d/00:00:21:00:00/40 Emask 0x2 (HSM violation)
    [21308.946219] ata1.00: status: { DRDY }
    [21308.946544] ata1.00: failed command: READ FPDMA QUEUED
    [21308.947003] ata1.00: cmd 60/00:88:70:0e:6d/02:00:21:00:00/40 tag 17 ncq dma 262144 in
    [21308.947003]          res 40/00:88:70:0e:6d/00:00:21:00:00/40 Emask 0x2 (HSM violation)
    [21308.948419] ata1.00: status: { DRDY }
    [21308.948747] ata1.00: failed command: READ FPDMA QUEUED
    [21308.949207] ata1.00: cmd 60/00:90:70:12:6d/01:00:21:00:00/40 tag 18 ncq dma 131072 in
    [21308.949207]          res 40/00:88:70:0e:6d/00:00:21:00:00/40 Emask 0x2 (HSM violation)
    [21308.950575] ata1.00: status: { DRDY }
    [21308.950909] ata1: hard resetting link
    [21309.427090] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    [21309.428854] ata1.00: configured for UDMA/133
    [21309.429342] sd 0:0:0:0: [sda] tag#4 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [21309.430145] sd 0:0:0:0: [sda] tag#4 Sense Key : 0x5 [current]
    [21309.430659] sd 0:0:0:0: [sda] tag#4 ASC=0x21 ASCQ=0x4
    [21309.431157] sd 0:0:0:0: [sda] tag#4 CDB: opcode=0x88 88 00 00 00 00 00 21 6d 11 70 00 00 01 00 00 00
    [21309.431964] blk_update_request: I/O error, dev sda, sector 560796016 op 0x0:(READ) flags 0x700 phys_seg 2 prio class 0
    [21309.432934] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=287110782976 size=131072 flags=1808b0
    [21309.433947] sd 0:0:0:0: [sda] tag#17 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [21309.434762] sd 0:0:0:0: [sda] tag#17 Sense Key : 0x5 [current]
    [21309.435329] sd 0:0:0:0: [sda] tag#17 ASC=0x21 ASCQ=0x4
    [21309.435799] sd 0:0:0:0: [sda] tag#17 CDB: opcode=0x88 88 00 00 00 00 00 21 6d 0e 70 00 00 02 00 00 00
    [21309.436609] blk_update_request: I/O error, dev sda, sector 560795248 op 0x0:(READ) flags 0x700 phys_seg 4 prio class 0
    [21309.437561] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=287110389760 size=262144 flags=40080cb0
    [21309.438558] sd 0:0:0:0: [sda] tag#18 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [21309.439397] sd 0:0:0:0: [sda] tag#18 Sense Key : 0x5 [current]
    [21309.439923] sd 0:0:0:0: [sda] tag#18 ASC=0x21 ASCQ=0x4
    [21309.440388] sd 0:0:0:0: [sda] tag#18 CDB: opcode=0x88 88 00 00 00 00 00 21 6d 12 70 00 00 01 00 00 00
    [21309.441198] blk_update_request: I/O error, dev sda, sector 560796272 op 0x0:(READ) flags 0x700 phys_seg 2 prio class 0
    [21309.442148] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=287110914048 size=131072 flags=1808b0
    [21309.443166] ata1: EH complete
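The zio offsets are consistent with the kernel's sector numbers once the dm-crypt mapping is accounted for: ZFS sees the LUKS-mapped device, which starts after the LUKS header. Assuming the LUKS2 default data offset of 32768 sectors (16 MiB) — an assumption here; verify with `cryptsetup luksDump` on your device — the numbers line up exactly:

```shell
# tag#4 error above: sector 560796016 on sda, zio offset 287110782976
sector=560796016
luks_data_offset=32768     # sectors; LUKS2 default data offset (assumed)
echo $(( (sector - luks_data_offset) * 512 ))   # -> 287110782976
```

The same arithmetic reproduces the tag#17 and tag#18 offsets, which suggests the crypt layer is mapping correctly and the failures are at the SATA link (HSM violations), not in the I/O path above it.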
    [23343.877842] ata2.00: exception Emask 0x2 SAct 0x280002 SErr 0x400 action 0x6
    [23343.878472] ata2.00: irq_stat 0x08000000
    [23343.878821] ata2: SError: { Proto }
    [23343.879134] ata2.00: failed command: READ FPDMA QUEUED
    [23343.879594] ata2.00: cmd 60/00:08:10:c4:2e/01:00:6e:00:00/40 tag 1 ncq dma 131072 in
    [23343.879594]          res 40/00:08:10:c4:2e/00:00:6e:00:00/40 Emask 0x2 (HSM violation)
    [23343.880954] ata2.00: status: { DRDY }
    [23343.881279] ata2.00: failed command: READ FPDMA QUEUED
    [23343.881738] ata2.00: cmd 60/00:98:10:c6:2e/01:00:6e:00:00/40 tag 19 ncq dma 131072 in
    [23343.881738]          res 40/00:08:10:c4:2e/00:00:6e:00:00/40 Emask 0x2 (HSM violation)
    [23343.883162] ata2.00: status: { DRDY }
    [23343.883492] ata2.00: failed command: READ FPDMA QUEUED
    [23343.883953] ata2.00: cmd 60/00:a8:10:c5:2e/01:00:6e:00:00/40 tag 21 ncq dma 131072 in
    [23343.883953]          res 40/00:08:10:c4:2e/00:00:6e:00:00/40 Emask 0x2 (HSM violation)
    [23343.885321] ata2.00: status: { DRDY }
    [23343.885654] ata2: hard resetting link
    [23344.361826] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    [23344.363595] ata2.00: configured for UDMA/133
    [23344.364067] sd 1:0:0:0: [sdb] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [23344.364869] sd 1:0:0:0: [sdb] tag#1 Sense Key : 0x5 [current]
    [23344.365384] sd 1:0:0:0: [sdb] tag#1 ASC=0x21 ASCQ=0x4
    [23344.365882] sd 1:0:0:0: [sdb] tag#1 CDB: opcode=0x88 88 00 00 00 00 00 6e 2e c4 10 00 00 01 00 00 00
    [23344.366687] blk_update_request: I/O error, dev sdb, sector 1848558608 op 0x0:(READ) flags 0x700 phys_seg 2 prio class 0
    [23344.367660] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=946445230080 size=131072 flags=1808b0
    [23344.368660] sd 1:0:0:0: [sdb] tag#19 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [23344.369472] sd 1:0:0:0: [sdb] tag#19 Sense Key : 0x5 [current]
    [23344.370040] sd 1:0:0:0: [sdb] tag#19 ASC=0x21 ASCQ=0x4
    [23344.370510] sd 1:0:0:0: [sdb] tag#19 CDB: opcode=0x88 88 00 00 00 00 00 6e 2e c6 10 00 00 01 00 00 00
    [23344.371320] blk_update_request: I/O error, dev sdb, sector 1848559120 op 0x0:(READ) flags 0x700 phys_seg 2 prio class 0
    [23344.372281] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=946445492224 size=131072 flags=1808b0
    [23344.373266] sd 1:0:0:0: [sdb] tag#21 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [23344.374113] sd 1:0:0:0: [sdb] tag#21 Sense Key : 0x5 [current]
    [23344.374639] sd 1:0:0:0: [sdb] tag#21 ASC=0x21 ASCQ=0x4
    [23344.375108] sd 1:0:0:0: [sdb] tag#21 CDB: opcode=0x88 88 00 00 00 00 00 6e 2e c5 10 00 00 01 00 00 00
    [23344.375920] blk_update_request: I/O error, dev sdb, sector 1848558864 op 0x0:(READ) flags 0x700 phys_seg 2 prio class 0
    [23344.376877] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=946445361152 size=131072 flags=1808b0
    [23344.377899] ata2: EH complete
     

  8. I have tried changing the CPU governor to ondemand:

    root@helios64:/# cat /etc/default/cpufrequtils
    ENABLE="true"
    GOVERNOR=ondemand
    MAX_SPEED=1800000
    MIN_SPEED=408000

     

    but after about 5 minutes, my server rebooted :(

    Previously I used:

    cat /etc/default/cpufrequtils
    
    ENABLE="true"
    GOVERNOR=performance
    MAX_SPEED=1416000
    MIN_SPEED=1416000

     

    for the last 3 kernel updates without any issue. Uptimes were mostly 2-3 weeks without kernel panics or reboots.
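Since `performance` pinned at 1416000 was stable for weeks while `ondemand` with a 1800000 ceiling crashed within minutes, a middle ground worth testing (my suggestion, not verified on this hardware) is `ondemand` capped at the known-stable maximum:

```shell
# /etc/default/cpufrequtils -- ondemand, but capped at the frequency
# that was stable for weeks under the performance governor
ENABLE="true"
GOVERNOR=ondemand
MAX_SPEED=1416000
MIN_SPEED=408000
```

This would help isolate whether the reboots come from the governor switching itself or from the 1800000 operating point.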

     

    Unfortunately I have nothing in my logs. I have changed verbosity to 7, and I'm logging my console output to a file. I will update when I get the error.

    I have a ZFS mirror (LUKS on top) + a ZFS cache.

    Also, I didn't change the disk scheduler. I have bfq, and it has worked since the last upgrade without any problem.

     

    Currently I'm running a zfs scrub... we will see.

     

    [update 1] : 

    root@helios64:/# uptime
     11:02:40 up  2:28,  1 user,  load average: 4.80, 4.81, 4.60

     

    The scrub is still in progress, but I got a few read errors. I'm not sure if it's a problem with the HDD connector, the disk scheduler, or the CPU frequency...

    I didn't have these errors before.

     

     


    root@helios64:/# zpool status -v data
      pool: data
     state: DEGRADED
    status: One or more devices has experienced an unrecoverable error.  An
            attempt was made to correct the error.  Applications are unaffected.
    action: Determine if the device needs to be replaced, and clear the errors
            using 'zpool clear' or replace the device with 'zpool replace'.
       see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
      scan: scrub in progress since Tue Mar 23 08:49:19 2021
            922G scanned at 126M/s, 640G issued at 87.5M/s, 2.11T total
            2.75M repaired, 29.62% done, 04:56:36 to go
    config:

            NAME           STATE     READ WRITE CKSUM
            data           DEGRADED     0     0     0
              mirror-0     DEGRADED     0     0     0
                sda-crypt  ONLINE       0     0     0
                sdb-crypt  DEGRADED     3     0    22  too many errors  (repairing)
            cache
              sdc2         ONLINE       0     0     0
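The "to go" estimate in the scrub status is simple arithmetic over the issue rate, which can be reproduced from the figures above (2.11T total, 640G issued, 87.5M/s):

```shell
# Cross-check zpool's scrub ETA: remaining data / issue bandwidth.
total_mib=$(( 2161 * 1024 ))      # 2.11 TiB ~= 2161 GiB (rounded)
issued_mib=$(( 640 * 1024 ))
eta_s=$(( (total_mib - issued_mib) * 10 / 875 ))   # divide by 87.5 MiB/s
echo "$eta_s"   # 17800 s, i.e. about 4h57m -- matching "04:56:36 to go"
```

The small discrepancy is just rounding in the totals; the reported estimate is internally consistent.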

     

     


    [ 8255.882611] ata2.00: irq_stat 0x08000000
    [ 8255.882960] ata2: SError: { Proto }
    [ 8255.883273] ata2.00: failed command: READ FPDMA QUEUED
    [ 8255.883732] ata2.00: cmd 60/00:10:18:4d:9d/08:00:15:00:00/40 tag 2 ncq dma 1048576 in
    [ 8255.883732]          res 40/00:10:18:4d:9d/00:00:15:00:00/40 Emask 0x2 (HSM violation)
    [ 8255.885101] ata2.00: status: { DRDY }
    [ 8255.885426] ata2.00: failed command: READ FPDMA QUEUED
    [ 8255.885885] ata2.00: cmd 60/00:18:90:5b:9d/08:00:15:00:00/40 tag 3 ncq dma 1048576 in
    [ 8255.885885]          res 40/00:10:18:4d:9d/00:00:15:00:00/40 Emask 0x2 (HSM violation)
    [ 8255.887294] ata2.00: status: { DRDY }
    [ 8255.887623] ata2.00: failed command: READ FPDMA QUEUED
    [ 8255.888083] ata2.00: cmd 60/10:98:18:55:9d/06:00:15:00:00/40 tag 19 ncq dma 794624 in
    [ 8255.888083]          res 40/00:10:18:4d:9d/00:00:15:00:00/40 Emask 0x2 (HSM violation)
    [ 8255.889450] ata2.00: status: { DRDY }
    [ 8255.889783] ata2: hard resetting link
    [ 8256.365961] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    [ 8256.367771] ata2.00: configured for UDMA/133
    [ 8256.368540] sd 1:0:0:0: [sdb] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [ 8256.369343] sd 1:0:0:0: [sdb] tag#2 Sense Key : 0x5 [current]
    [ 8256.369858] sd 1:0:0:0: [sdb] tag#2 ASC=0x21 ASCQ=0x4
    [ 8256.370354] sd 1:0:0:0: [sdb] tag#2 CDB: opcode=0x88 88 00 00 00 00 00 15 9d 4d 18 00 00 08 00 00 00
    [ 8256.371158] blk_update_request: I/O error, dev sdb, sector 362630424 op 0x0:(READ) flags 0x700 phys_seg 16 prio class 0
    [ 8256.372125] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=185649999872 size=1048576 flags=40080cb0
    [ 8256.373117] sd 1:0:0:0: [sdb] tag#3 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [ 8256.373964] sd 1:0:0:0: [sdb] tag#3 Sense Key : 0x5 [current]
    [ 8256.374480] sd 1:0:0:0: [sdb] tag#3 ASC=0x21 ASCQ=0x4
    [ 8256.374940] sd 1:0:0:0: [sdb] tag#3 CDB: opcode=0x88 88 00 00 00 00 00 15 9d 5b 90 00 00 08 00 00 00
    [ 8256.375742] blk_update_request: I/O error, dev sdb, sector 362634128 op 0x0:(READ) flags 0x700 phys_seg 16 prio class 0
    [ 8256.376704] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=185651896320 size=1048576 flags=40080cb0
    [ 8256.377724] sd 1:0:0:0: [sdb] tag#19 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [ 8256.378568] sd 1:0:0:0: [sdb] tag#19 Sense Key : 0x5 [current]
    [ 8256.379096] sd 1:0:0:0: [sdb] tag#19 ASC=0x21 ASCQ=0x4
    [ 8256.379561] sd 1:0:0:0: [sdb] tag#19 CDB: opcode=0x88 88 00 00 00 00 00 15 9d 55 18 00 00 06 10 00 00
    [ 8256.380371] blk_update_request: I/O error, dev sdb, sector 362632472 op 0x0:(READ) flags 0x700 phys_seg 13 prio class 0
    [ 8256.381336] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=185651048448 size=794624 flags=40080cb0
    [ 8256.382389] ata2: EH complete
     

     

    [update 2]:

    The server has been rebooted...

    The last task I remember running was Nextcloud generating thumbnails.

    Unfortunately there was nothing in the console (it was connected the whole time), even with:

    root@helios64:~# cat /boot/armbianEnv.txt 
    verbosity=7
    console=serial
    extraargs=earlyprintk ignore_loglevel

     

    The "failed command: READ FPDMA QUEUED" errors came from the zfs scrub a few hours earlier.

     

     


    [38857.198752] ata2.00: exception Emask 0x2 SAct 0x80600000 SErr 0x400 action 0x6
    [38857.199398] ata2.00: irq_stat 0x08000000
    [38857.199747] ata2: SError: { Proto }
    [38857.200059] ata2.00: failed command: READ FPDMA QUEUED
    [38857.200520] ata2.00: cmd 60/00:a8:d8:4e:5a/01:00:d9:00:00/40 tag 21 ncq dma 131072 in
    [38857.200520]          res 40/00:f8:d8:4b:5a/00:00:d9:00:00/40 Emask 0x2 (HSM violation)
    [38857.201887] ata2.00: status: { DRDY }
    [38857.202212] ata2.00: failed command: READ FPDMA QUEUED
    [38857.202671] ata2.00: cmd 60/00:b0:d8:4f:5a/01:00:d9:00:00/40 tag 22 ncq dma 131072 in
    [38857.202671]          res 40/00:f8:d8:4b:5a/00:00:d9:00:00/40 Emask 0x2 (HSM violation)
    [38857.204083] ata2.00: status: { DRDY }
    [38857.204411] ata2.00: failed command: READ FPDMA QUEUED
    [38857.204872] ata2.00: cmd 60/00:f8:d8:4b:5a/03:00:d9:00:00/40 tag 31 ncq dma 393216 in
    [38857.204872]          res 40/00:f8:d8:4b:5a/00:00:d9:00:00/40 Emask 0x2 (HSM violation)
    [38857.206239] ata2.00: status: { DRDY }
    [38857.206573] ata2: hard resetting link
    [38857.682727] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    [38857.684501] ata2.00: configured for UDMA/133
    [38857.685009] sd 1:0:0:0: [sdb] tag#21 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [38857.685819] sd 1:0:0:0: [sdb] tag#21 Sense Key : 0x5 [current] 
    [38857.686340] sd 1:0:0:0: [sdb] tag#21 ASC=0x21 ASCQ=0x4 
    [38857.686847] sd 1:0:0:0: [sdb] tag#21 CDB: opcode=0x88 88 00 00 00 00 00 d9 5a 4e d8 00 00 01 00 00 00
    [38857.687660] blk_update_request: I/O error, dev sdb, sector 3646574296 op 0x0:(READ) flags 0x700 phys_seg 4 prio class 0
    [38857.688628] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=1867029262336 size=131072 flags=1808b0
    [38857.689672] sd 1:0:0:0: [sdb] tag#22 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [38857.690487] sd 1:0:0:0: [sdb] tag#22 Sense Key : 0x5 [current] 
    [38857.691051] sd 1:0:0:0: [sdb] tag#22 ASC=0x21 ASCQ=0x4 
    [38857.691518] sd 1:0:0:0: [sdb] tag#22 CDB: opcode=0x88 88 00 00 00 00 00 d9 5a 4f d8 00 00 01 00 00 00
    [38857.692329] blk_update_request: I/O error, dev sdb, sector 3646574552 op 0x0:(READ) flags 0x700 phys_seg 4 prio class 0
    [38857.693288] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=1867029393408 size=131072 flags=1808b0
    [38857.694284] sd 1:0:0:0: [sdb] tag#31 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
    [38857.695129] sd 1:0:0:0: [sdb] tag#31 Sense Key : 0x5 [current] 
    [38857.695652] sd 1:0:0:0: [sdb] tag#31 ASC=0x21 ASCQ=0x4 
    [38857.696116] sd 1:0:0:0: [sdb] tag#31 CDB: opcode=0x88 88 00 00 00 00 00 d9 5a 4b d8 00 00 03 00 00 00
    [38857.696927] blk_update_request: I/O error, dev sdb, sector 3646573528 op 0x0:(READ) flags 0x700 phys_seg 12 prio class 0
    [38857.697892] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=1867028869120 size=393216 flags=40080cb0
    [38857.698965] ata2: EH complete
    [38866.899025] [UFW BLOCK] IN=eth0 OUT= MAC=64:62:66:d0:01:fa:78:4f:43:7a:28:ae:08:00 SRC=192.168.18.177 DST=192.168.18.10 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=59353 DPT=443 WINDOW=2048 RES=0x00 ACK FIN URGP=0 
    [38873.508619] [UFW BLOCK] IN=eth0 OUT= MAC=64:62:66:d0:01:fa:78:4f:43:7a:28:ae:08:00 SRC=192.168.18.177 DST=192.168.18.10 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=59353 DPT=443 WINDOW=2048 RES=0x00 ACK RST URGP=0 
    DDR Version 1.24 20191016
    In
    soft reset
    SRX
    channel 0
    CS = 0
    MR0=0x18
    MR4=0x1
    MR5=0x1
    MR8=0x10
    MR12=0x72
    MR14=0x72
    MR18=0x0
    MR19=0x0
    MR24=0x8
    MR25=0x0
    channel 1
    CS = 0
    MR0=0x18
    MR4=0x1
    MR5=0x1
    MR8=0x10
    MR12=0x72
    MR14=0x72
    MR18=0x0
    MR19=0x0
    MR24=0x8
    MR25=0x0
    channel 0 training pass!
    channel 1 training pass!
    change freq to 416MHz 0,1
    Channel 0: LPDDR4,416MHz
    Bus Width=32 Col=10 Bank=8 Row=16 CS=1 Die Bus-Width=16 Size=2048MB
    Channel 1: LPDDR4,416MHz
    Bus Width=32 Col=10 Bank=8 Row=16 CS=1 Die Bus-Width=16 Size=2048MB
    256B stride
    channel 0
    CS = 0
    MR0=0x18
    MR4=0x82
    MR5=0x1
    MR8=0x10
    MR12=0x72
    MR14=0x72
    MR18=0x0
    MR19=0x0
    MR24=0x8
    MR25=0x0
    channel 1
    CS = 0
    MR0=0x18
    MR4=0x1
    MR5=0x1
    MR8=0x10
    MR12=0x72
    MR14=0x72
    MR18=0x0
    MR19=0x0
    MR24=0x8
    MR25=0x0
    channel 0 training pass!
    channel 1 training pass!
    channel 0, cs 0, advanced training done
    channel 1, cs 0, advanced training done
    change freq to 856MHz 1,0
    ch 0 ddrconfig = 0x101, ddrsize = 0x40
    ch 1 ddrconfig = 0x101, ddrsize = 0x40
    pmugrf_os_reg[2] = 0x32C1F2C1, stride = 0xD
    ddr_set_rate to 328MHZ
    ddr_set_rate to 666MHZ
    ddr_set_rate to 928MHZ
    channel 0, cs 0, advanced training done
    channel 1, cs 0, advanced training done
    ddr_set_rate to 416MHZ, ctl_index 0
    ddr_set_rate to 856MHZ, ctl_index 1
    support 416 856 328 666 928 MHz, current 856MHz
    OUT
    Boot1: 2019-03-14, version: 1.19
    CPUId = 0x0
    ChipType = 0x10, 322
    SdmmcInit=2 0
    BootCapSize=100000
    UserCapSize=14910MB
    FwPartOffset=2000 , 100000
    mmc0:cmd8,20
    mmc0:cmd5,20
    mmc0:cmd55,20
    mmc0:cmd1,20
    mmc0:cmd8,20
    mmc0:cmd5,20
    mmc0:cmd55,20
    mmc0:cmd1,20
    mmc0:cmd8,20
    mmc0:cmd5,20
    mmc0:cmd55,20
    mmc0:cmd1,20
    SdmmcInit=0 1
    StorageInit ok = 68359
    SecureMode = 0
    SecureInit read PBA: 0x4
    SecureInit read PBA: 0x404
    SecureInit read PBA: 0x804
    SecureInit read PBA: 0xc04
    SecureInit read PBA: 0x1004
    SecureInit read PBA: 0x1404
    SecureInit read PBA: 0x1804
    SecureInit read PBA: 0x1c04
    SecureInit ret = 0, SecureMode = 0
    atags_set_bootdev: ret:(0)
    GPT 0x3380ec0 signature is wrong
    recovery gpt...
    GPT 0x3380ec0 signature is wrong
    recovery gpt fail!
    LoadTrust Addr:0x4000
    No find bl30.bin
    No find bl32.bin
    Load uboot, ReadLba = 2000
    Load OK, addr=0x200000, size=0xdd6b0
    RunBL31 0x40000
    NOTICE:  BL31: v1.3(debug):42583b6
    NOTICE:  BL31: Built : 07:55:13, Oct 15 2019
    NOTICE:  BL31: Rockchip release version: v1.1
    INFO:    GICv3 with legacy support detected. ARM GICV3 driver initialized in EL3
    INFO:    Using opteed sec cpu_context!
    INFO:    boot cpu mask: 0
    INFO:    plat_rockchip_pmu_init(1190): pd status 3e
    INFO:    BL31: Initializing runtime services
    WARNING: No OPTEE provided by BL2 boot loader, Booting device without OPTEE initialization. SMC`s destined for OPTEE will return SMC_UNK
    ERROR:   Error initializing runtime service opteed_fast
    INFO:    BL31: Preparing for EL3 exit to normal world
    INFO:    Entry point address = 0x200000
    INFO:    SPSR = 0x3c9


    U-Boot 2020.07-armbian (Nov 23 2020 - 16:22:15 +0100)

    SoC: Rockchip rk3399
    Reset cause: WDOG
    DRAM:  3.9 GiB
    PMIC:  RK808 
    SF: Detected w25q128 with page size 256 Bytes, erase size 4 KiB, total 16 MiB
    MMC:   mmc@fe320000: 1, sdhci@fe330000: 0
    Loading Environment from MMC... *** Warning - bad CRC, using default environment

    In:    serial
    Out:   serial
    Err:   serial
    Model: Helios64
    Revision: 1.2 - 4GB non ECC
    Net:   eth0: ethernet@fe300000
    scanning bus for devices...
    Hit any key to stop autoboot:  0 
    Card did not respond to voltage select!
    switch to partitions #0, OK
    mmc0(part 0) is current device
    Scanning mmc 0:1...
    Found U-Boot script /boot/boot.scr
    3185 bytes read in 18 ms (171.9 KiB/s)
    ## Executing script at 00500000
    Boot script loaded from mmc 0
    219 bytes read in 15 ms (13.7 KiB/s)
    16209617 bytes read in 1558 ms (9.9 MiB/s)
    28582400 bytes read in 2729 ms (10 MiB/s)
    81913 bytes read in 44 ms (1.8 MiB/s)
    2698 bytes read in 34 ms (77.1 KiB/s)
    Applying kernel provided DT fixup script (rockchip-fixup.scr)
    ## Executing script at 09000000
    ## Loading init Ramdisk from Legacy Image at 06000000 ...
       Image Name:   uInitrd
       Image Type:   AArch64 Linux RAMDisk Image (gzip compressed)
       Data Size:    16209553 Bytes = 15.5 MiB
       Load Address: 00000000
       Entry Point:  00000000
       Verifying Checksum ... OK
    ## Flattened Device Tree blob at 01f00000
       Booting using the fdt blob at 0x1f00000
       Loading Ramdisk to f4f71000, end f5ee6691 ... OK
       Loading Device Tree to 00000000f4ef4000, end 00000000f4f70fff ... OK

    Starting kernel ...

    [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
    [    0.000000] Linux version 5.10.21-rockchip64 (root@hirsute) (aarch64-linux-gnu-gcc (GNU Toolchain for the A-profile Architecture 8.3-2019.03 (arm-rel-8.36)) 8.3.0, GNU ld (GNU Toolchain for the A-profile Architecture 8.3-2019.03 (arm-rel-8.36)) 2.32.0.20190321) #21.02.3 SMP PREEMPT Mon Mar 8 01:05:08 UTC 2021
    [    0.000000] Machine model: Helios64
     

  9. Hey,

    I have a mirrored ZFS pool with LUKS encryption on top + an SSD cache.

    root@helios64:~# zpool status
      pool: data
     state: ONLINE
      scan: scrub repaired 0B in 05:27:58 with 0 errors on Sun Feb 14 20:06:49 2021
    config:
        NAME           STATE     READ WRITE CKSUM
        data           ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            sda-crypt  ONLINE       0     0     0
            sdb-crypt  ONLINE       0     0     0
        cache
          sdc2         ONLINE       0     0     0

     

    Here are my results... it looks like it is faster to have LUKS on top...

     


     fio --name=single-sequential-read --ioengine=posixaio --rw=read --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
    single-sequential-read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
    fio-3.12
    Starting 1 process
    single-sequential-read: Laying out IO file (1 file / 16384MiB)
    Jobs: 1 (f=1): [R(1)][100.0%][r=122MiB/s][r=122 IOPS][eta 00m:00s]
    single-sequential-read: (groupid=0, jobs=1): err= 0: pid=9237: Sat Feb 20 10:13:39 2021
      read: IOPS=113, BW=114MiB/s (119MB/s)(6818MiB/60002msec)
        slat (usec): min=2, max=2613, avg=17.74, stdev=32.25
        clat (usec): min=1313, max=442050, avg=8762.93, stdev=15815.09
         lat (usec): min=1324, max=442069, avg=8780.67, stdev=15815.08
        clat percentiles (msec):
         |  1.00th=[    3],  5.00th=[    3], 10.00th=[    3], 20.00th=[    3],
         | 30.00th=[    3], 40.00th=[    3], 50.00th=[    4], 60.00th=[    7],
         | 70.00th=[   10], 80.00th=[   12], 90.00th=[   15], 95.00th=[   21],
         | 99.00th=[   77], 99.50th=[   95], 99.90th=[  228], 99.95th=[  257],
         | 99.99th=[  443]
       bw (  KiB/s): min=22528, max=147456, per=99.97%, avg=116320.82, stdev=25332.39, samples=120
       iops        : min=   22, max=  144, avg=113.51, stdev=24.79, samples=120
      lat (msec)   : 2=0.51%, 4=50.89%, 10=18.77%, 20=24.49%, 50=3.15%
      lat (msec)   : 100=1.77%, 250=0.32%, 500=0.07%
      cpu          : usr=0.55%, sys=0.28%, ctx=6854, majf=7, minf=74
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=6818,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=1

    Run status group 0 (all jobs):
       READ: bw=114MiB/s (119MB/s), 114MiB/s-114MiB/s (119MB/s-119MB/s), io=6818MiB (7149MB), run=60002-60002msec
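As a sanity check, fio's average bandwidth follows directly from the totals it reports (6818 MiB read in 60002 ms):

```shell
# Reproduce fio's reported average bandwidth from its own totals.
mib=6818; runtime_ms=60002
echo $(( mib * 1000 / runtime_ms ))   # -> 113 MiB/s (fio rounds to 114)
```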
     

     

  10. Thx @FloBaoti 

    I had to remove openmediavault-zfs first....

     


    apt remove openmediavault-zfs

    apt remove dkms zfs-dkms

    apt install dkms zfs-dkms

    apt install openmediavault-zfs

     

     

    root@helios64:/# zfs --version
    zfs-2.0.1-1
    zfs-kmod-2.0.1-1
    root@helios64:/# uname -a
    Linux helios64 5.10.12-rockchip64 #21.02.1 SMP PREEMPT Wed Feb 3 20:55:02 CET 2021 aarch64 GNU/Linux
    root@helios64:/#
     

    Now I have working ZFS again ;)

  11. It looks like the latest update (Linux helios64 5.10.12-rockchip64 #21.02.1) broke the ZFS DKMS module.

     

    After the reboot the zfs kernel module is missing...

    I'll probably have to build it manually again :(

     


    root@helios64:/# uname -a
    Linux helios64 5.10.12-rockchip64 #21.02.1 SMP PREEMPT Wed Feb 3 20:55:02 CET 2021 aarch64 GNU/Linux
    root@helios64:/# zfs --version
    zfs-2.0.1-1
    zfs_version_kernel() failed: No such file or directory
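The "zfs_version_kernel() failed" message is consistent with dkms simply not having built the module for the new kernel yet: dkms installs modules per kernel version, so after a kernel update the new /lib/modules tree has no zfs.ko until a rebuild. A quick sketch of where to look (the kernel version here stands in for `uname -r`; path layout is the standard dkms one):

```shell
# dkms modules live under /lib/modules/<kernel>/updates/dkms/; after a
# kernel update that directory is empty for the new kernel until dkms
# rebuilds.  On the box, use kver=$(uname -r) instead.
kver="5.10.12-rockchip64"
modpath="/lib/modules/${kver}/updates/dkms/zfs.ko"
echo "$modpath"
# on the affected system:
#   [ -e "$modpath" ] || echo "zfs module missing; try: dkms autoinstall"
```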
     

     

     


    root@helios64:/# apt upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Calculating upgrade... Done
    The following package was automatically installed and is no longer required:
      dkms
    Use 'apt autoremove' to remove it.
    The following packages will be upgraded:
      armbian-config armbian-firmware base-files device-tree-compiler file iproute2 libcairo2 libgnutls30 libmagic-mgc libmagic1 libnss-myhostname libpam-systemd libsystemd0
      libudev1 linux-buster-root-current-helios64 linux-dtb-current-rockchip64 linux-headers-current-rockchip64 linux-image-current-rockchip64 linux-u-boot-helios64-current
      systemd systemd-sysv udev unzip
    23 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Need to get 68.4 MB of archives.
    After this operation, 8992 kB disk space will be freed.
    Do you want to continue? [Y/n] y
    Get:1 http://deb.debian.org/debian buster/main arm64 base-files arm64 10.3+deb10u8 [69.9 kB]
    Get:2 http://deb.debian.org/debian buster/main arm64 systemd-sysv arm64 241-7~deb10u6 [100 kB]
    Get:3 http://deb.debian.org/debian buster/main arm64 libpam-systemd arm64 241-7~deb10u6 [201 kB]
    Get:4 http://deb.debian.org/debian buster/main arm64 libsystemd0 arm64 241-7~deb10u6 [314 kB]
    Get:5 http://deb.debian.org/debian buster/main arm64 libnss-myhostname arm64 241-7~deb10u6 [122 kB]
    Get:6 http://deb.debian.org/debian buster/main arm64 systemd arm64 241-7~deb10u6 [3258 kB]
    Get:7 http://deb.debian.org/debian buster/main arm64 udev arm64 241-7~deb10u6 [1245 kB]
    Get:8 http://deb.debian.org/debian buster/main arm64 libudev1 arm64 241-7~deb10u6 [146 kB]
    Get:9 http://deb.debian.org/debian buster/main arm64 libgnutls30 arm64 3.6.7-4+deb10u6 [1062 kB]
    Get:10 http://deb.debian.org/debian buster/main arm64 iproute2 arm64 4.20.0-2+deb10u1 [805 kB]
    Get:11 http://deb.debian.org/debian buster/main arm64 file arm64 1:5.35-4+deb10u2 [66.5 kB]
    Get:12 http://deb.debian.org/debian buster/main arm64 libmagic1 arm64 1:5.35-4+deb10u2 [115 kB]
    Get:13 http://deb.debian.org/debian buster/main arm64 libmagic-mgc arm64 1:5.35-4+deb10u2 [242 kB]
    Get:14 http://deb.debian.org/debian buster/main arm64 unzip arm64 6.0-23+deb10u2 [166 kB]
    Get:15 http://deb.debian.org/debian buster/main arm64 device-tree-compiler arm64 1.4.7-4 [251 kB]
    Get:16 http://deb.debian.org/debian buster/main arm64 libcairo2 arm64 1.16.0-4+deb10u1 [644 kB]
    Get:17 https://imola.armbian.com/apt buster/main arm64 armbian-config all 21.02.1 [44.8 kB]
    Get:18 https://mirrors.netix.net/armbian/apt buster/main arm64 armbian-firmware all 21.02.1 [6922 kB]
    Get:19 https://mirrors.dotsrc.org/armbian-apt buster/main arm64 linux-buster-root-current-helios64 arm64 21.02.1 [416 kB]
    Get:21 https://imola.armbian.com/apt buster/main arm64 linux-headers-current-rockchip64 arm64 21.02.1 [11.5 MB]
    Get:20 https://mirrors.dotsrc.org/armbian-apt buster/main arm64 linux-dtb-current-rockchip64 arm64 21.02.1 [311 kB]
    Get:23 https://mirrors.dotsrc.org/armbian-apt buster/main arm64 linux-u-boot-helios64-current arm64 21.02.1 [478 kB]
    Get:22 https://mirrors.netix.net/armbian/apt buster/main arm64 linux-image-current-rockchip64 arm64 21.02.1 [39.9 MB]
    Fetched 68.4 MB in 5s (13.7 MB/s)
    Preconfiguring packages ...
    (Reading database ... 102529 files and directories currently installed.)
    Preparing to unpack .../base-files_10.3+deb10u8_arm64.deb ...
    Unpacking base-files (10.3+deb10u8) over (10.3+deb10u7) ...
    Setting up base-files (10.3+deb10u8) ...
    Installing new version of config file /etc/debian_version ...
    (Reading database ... 102529 files and directories currently installed.)
    Preparing to unpack .../systemd-sysv_241-7~deb10u6_arm64.deb ...
    Unpacking systemd-sysv (241-7~deb10u6) over (241-7~deb10u5) ...
    Preparing to unpack .../libpam-systemd_241-7~deb10u6_arm64.deb ...
    Unpacking libpam-systemd:arm64 (241-7~deb10u6) over (241-7~deb10u5) ...
    Preparing to unpack .../libsystemd0_241-7~deb10u6_arm64.deb ...
    Unpacking libsystemd0:arm64 (241-7~deb10u6) over (241-7~deb10u5) ...
    Setting up libsystemd0:arm64 (241-7~deb10u6) ...
    (Reading database ... 102529 files and directories currently installed.)
    Preparing to unpack .../libnss-myhostname_241-7~deb10u6_arm64.deb ...
    Unpacking libnss-myhostname:arm64 (241-7~deb10u6) over (241-7~deb10u5) ...
    Preparing to unpack .../systemd_241-7~deb10u6_arm64.deb ...
    Unpacking systemd (241-7~deb10u6) over (241-7~deb10u5) ...
    Preparing to unpack .../udev_241-7~deb10u6_arm64.deb ...
    Unpacking udev (241-7~deb10u6) over (241-7~deb10u5) ...
    Preparing to unpack .../libudev1_241-7~deb10u6_arm64.deb ...
    Unpacking libudev1:arm64 (241-7~deb10u6) over (241-7~deb10u5) ...
    Setting up libudev1:arm64 (241-7~deb10u6) ...
    (Reading database ... 102529 files and directories currently installed.)
    Preparing to unpack .../libgnutls30_3.6.7-4+deb10u6_arm64.deb ...
    Unpacking libgnutls30:arm64 (3.6.7-4+deb10u6) over (3.6.7-4+deb10u5) ...
    Setting up libgnutls30:arm64 (3.6.7-4+deb10u6) ...
    (Reading database ... 102529 files and directories currently installed.)
    Preparing to unpack .../00-iproute2_4.20.0-2+deb10u1_arm64.deb ...
    Unpacking iproute2 (4.20.0-2+deb10u1) over (4.20.0-2) ...
    Preparing to unpack .../01-file_1%3a5.35-4+deb10u2_arm64.deb ...
    Unpacking file (1:5.35-4+deb10u2) over (1:5.35-4+deb10u1) ...
    Preparing to unpack .../02-libmagic1_1%3a5.35-4+deb10u2_arm64.deb ...
    Unpacking libmagic1:arm64 (1:5.35-4+deb10u2) over (1:5.35-4+deb10u1) ...
    Preparing to unpack .../03-libmagic-mgc_1%3a5.35-4+deb10u2_arm64.deb ...
    Unpacking libmagic-mgc (1:5.35-4+deb10u2) over (1:5.35-4+deb10u1) ...
    Preparing to unpack .../04-unzip_6.0-23+deb10u2_arm64.deb ...
    Unpacking unzip (6.0-23+deb10u2) over (6.0-23+deb10u1) ...
    Preparing to unpack .../05-armbian-config_21.02.1_all.deb ...
    Unpacking armbian-config (21.02.1) over (20.11.6) ...
    Preparing to unpack .../06-armbian-firmware_21.02.1_all.deb ...
    Unpacking armbian-firmware (21.02.1) over (20.11.3) ...
    Preparing to unpack .../07-device-tree-compiler_1.4.7-4_arm64.deb ...
    Unpacking device-tree-compiler (1.4.7-4) over (1.4.7-3) ...
    Preparing to unpack .../08-libcairo2_1.16.0-4+deb10u1_arm64.deb ...
    Unpacking libcairo2:arm64 (1.16.0-4+deb10u1) over (1.16.0-4) ...
    Preparing to unpack .../09-linux-buster-root-current-helios64_21.02.1_arm64.deb ...
    Unpacking linux-buster-root-current-helios64 (21.02.1) over (20.11.6) ...
    Preparing to unpack .../10-linux-dtb-current-rockchip64_21.02.1_arm64.deb ...
    Unpacking linux-dtb-current-rockchip64 (21.02.1) over (20.11.4) ...
    Preparing to unpack .../11-linux-headers-current-rockchip64_21.02.1_arm64.deb ...
    Unpacking linux-headers-current-rockchip64 (21.02.1) over (20.11.4) ...
    Preparing to unpack .../12-linux-image-current-rockchip64_21.02.1_arm64.deb ...
    dkms: removing: zfs 2.0.1 (5.9.14-rockchip64) (aarch64)

    -------- Uninstall Beginning --------
    Module:  zfs
    Version: 2.0.1
    Kernel:  5.9.14-rockchip64 (aarch64)
    -------------------------------------

    Status: Before uninstall, this module version was ACTIVE on this kernel.

    zavl.ko:
     - Uninstallation
       - Deleting from: /lib/modules/5.9.14-rockchip64/
    rmdir: failed to remove '': No such file or directory
     - Original module
       - No original module was found for this module on this kernel.
       - Use the dkms install command to reinstall any previous module version.


    znvpair.ko:
     - Uninstallation
       - Deleting from: /lib/modules/5.9.14-rockchip64/
    rmdir: failed to remove '': No such file or directory
     - Original module
       - No original module was found for this module on this kernel.
       - Use the dkms install command to reinstall any previous module version.


    zunicode.ko:
     - Uninstallation
       - Deleting from: /lib/modules/5.9.14-rockchip64/
    rmdir: failed to remove '': No such file or directory
     - Original module
       - No original module was found for this module on this kernel.
       - Use the dkms install command to reinstall any previous module version.


    zcommon.ko:
     - Uninstallation
       - Deleting from: /lib/modules/5.9.14-rockchip64/updates/dkms/
    rmdir: failed to remove 'updates/dkms': Directory not empty
     - Original module
       - No original module was found for this module on this kernel.
       - Use the dkms install command to reinstall any previous module version.


    zfs.ko:
     - Uninstallation
       - Deleting from: /lib/modules/5.9.14-rockchip64/updates/dkms/
    rmdir: failed to remove 'updates/dkms': Directory not empty
     - Original module
       - No original module was found for this module on this kernel.
       - Use the dkms install command to reinstall any previous module version.


    icp.ko:
     - Uninstallation
       - Deleting from: /lib/modules/5.9.14-rockchip64/
    rmdir: failed to remove '': No such file or directory
     - Original module
       - No original module was found for this module on this kernel.
       - Use the dkms install command to reinstall any previous module version.


    zlua.ko:
     - Uninstallation
       - Deleting from: /lib/modules/5.9.14-rockchip64/updates/dkms/
     - Original module
       - No original module was found for this module on this kernel.
       - Use the dkms install command to reinstall any previous module version.


    spl.ko:
     - Uninstallation
       - Deleting from: /lib/modules/5.9.14-rockchip64/
    rmdir: failed to remove '': No such file or directory
     - Original module
       - No original module was found for this module on this kernel.
       - Use the dkms install command to reinstall any previous module version.


    zzstd.ko:
     - Uninstallation
       - Deleting from: /lib/modules/5.9.14-rockchip64/
    rmdir: failed to remove '': No such file or directory
     - Original module
       - No original module was found for this module on this kernel.
       - Use the dkms install command to reinstall any previous module version.

    depmod....

    DKMS: uninstall completed.
    update-initramfs: Deleting /boot/initrd.img-5.9.14-rockchip64
    Removing obsolete file uInitrd-5.9.14-rockchip64
    Unpacking linux-image-current-rockchip64 (21.02.1) over (20.11.4) ...
    Preparing to unpack .../13-linux-u-boot-helios64-current_21.02.1_arm64.deb ...
    Unpacking linux-u-boot-helios64-current (21.02.1) over (20.11.6) ...
    Setting up linux-u-boot-helios64-current (21.02.1) ...
    Setting up libmagic-mgc (1:5.35-4+deb10u2) ...
    Setting up iproute2 (4.20.0-2+deb10u1) ...
    Setting up unzip (6.0-23+deb10u2) ...
    Setting up linux-image-current-rockchip64 (21.02.1) ...
    configure: error:
            *** Unable to build an empty module.

    Error! Bad return status for module build on kernel: 5.10.12-rockchip64 (aarch64)
    Consult /var/lib/dkms/zfs/2.0.1/build/make.log for more information.
    update-initramfs: Generating /boot/initrd.img-5.10.12-rockchip64
    cryptsetup: WARNING: The initramfs image may not contain cryptsetup binaries
        nor crypto modules. If that's on purpose, you may want to uninstall the
        'cryptsetup-initramfs' package in order to disable the cryptsetup initramfs
        integration and avoid this warning.
    update-initramfs: Converting to u-boot format
    Setting up libmagic1:arm64 (1:5.35-4+deb10u2) ...
    Setting up linux-buster-root-current-helios64 (21.02.1) ...
    Setting up systemd (241-7~deb10u6) ...
    Setting up file (1:5.35-4+deb10u2) ...
    Setting up libcairo2:arm64 (1.16.0-4+deb10u1) ...
    Setting up armbian-config (21.02.1) ...
    Setting up linux-dtb-current-rockchip64 (21.02.1) ...
    Setting up udev (241-7~deb10u6) ...
    update-initramfs: deferring update (trigger activated)
    Setting up libnss-myhostname:arm64 (241-7~deb10u6) ...
    Setting up device-tree-compiler (1.4.7-4) ...
    Setting up armbian-firmware (21.02.1) ...
    Setting up linux-headers-current-rockchip64 (21.02.1) ...
    Compiling headers - please wait ...
    Setting up systemd-sysv (241-7~deb10u6) ...
    Setting up libpam-systemd:arm64 (241-7~deb10u6) ...
    Processing triggers for initramfs-tools (0.133+deb10u1) ...
    update-initramfs: Generating /boot/initrd.img-5.10.12-rockchip64
    cryptsetup: WARNING: The initramfs image may not contain cryptsetup binaries
        nor crypto modules. If that's on purpose, you may want to uninstall the
        'cryptsetup-initramfs' package in order to disable the cryptsetup initramfs
        integration and avoid this warning.
    update-initramfs: Converting to u-boot format
    Processing triggers for libc-bin (2.28-10) ...
    Processing triggers for man-db (2.8.5-2) ...
    Processing triggers for cracklib-runtime (2.9.6-2) ...
    Processing triggers for dbus (1.12.20-0+deb10u1) ...
    Processing triggers for mime-support (3.62) ...
     

     

  12. @tionebrr I built it from this script yesterday without any issues.

    The only change I made was replacing make deb with make deb-kmod, so that only the kmod package gets built (which is what I needed after a kernel upgrade):

     

    cd "/scratch/$zfsver-$(uname -r)"
    sh autogen.sh
    ./configure
    make -s -j$(nproc)
    make deb-kmod
    mkdir "/scratch/deb-$zfsver-$(uname -r)"
    cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
    rm -rf  "/scratch/$zfsver-$(uname -r)"
    exit
    EOF


    If you run into trouble, please PM me and I can send you the deb package (mine is kmod-zfs-5.9.14-rockchip64_2.0.0-1_arm64.deb). Note that I changed zfsver to the zfs-2.0.0 git tag instead of zfsver="zfs-2.0.0-rc6"
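    For reference, installing the resulting kmod package on the running kernel could look roughly like this (the filename pattern is from my build; adjust the package version to yours, and treat this as a sketch rather than a tested recipe):

```shell
# install the freshly built kernel module package
# (the kernel string in the filename must match `uname -r`)
sudo dpkg -i "kmod-zfs-$(uname -r)_2.0.0-1_arm64.deb"

# reload the module stack so the new build is actually used
sudo modprobe -r zfs
sudo modprobe zfs
zfs version
```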

     

    I also commented out the whole cleanup section, installed ZFS from the official repo, and use only the manually installed kmod. Otherwise I had problems with OMV :/

    root@helios64:~# zfs version
    zfs-0.8.5-3~bpo10+1
    zfs-kmod-2.0.0-1
    root@helios64:~# 

     

    root@helios64:~# dpkg -l |grep zfs
    ii  kmod-zfs-5.9.10-rockchip64           2.0.0-0                             arm64        zfs kernel module(s) for 5.9.10-rockchip64
    ii  kmod-zfs-5.9.11-rockchip64           2.0.0-1                             arm64        zfs kernel module(s) for 5.9.11-rockchip64
    ii  kmod-zfs-5.9.14-rockchip64           2.0.0-1                             arm64        zfs kernel module(s) for 5.9.14-rockchip64
    ii  libzfs2linux                         0.8.5-3~bpo10+1                     arm64        OpenZFS filesystem library for Linux
    ii  openmediavault-zfs                   5.0.5                               arm64        OpenMediaVault plugin for ZFS
    ii  zfs-auto-snapshot                    1.2.4-2                             all          ZFS automatic snapshot service
    iF  zfs-dkms                             0.8.5-3~bpo10+1                     all          OpenZFS filesystem kernel modules for Linux
    ii  zfs-dkms-dummy                       0.8.4                               all          Dummy zfs-dkms package for when using built kmod
    ii  zfs-zed                              0.8.5-3~bpo10+1                     arm64        OpenZFS Event Daemon
    ii  zfsutils-linux                       0.8.5-3~bpo10+1                     arm64        command-line tools to manage OpenZFS filesystems
    root@helios64:~# 

     

     

    @ShadowDance

    I forgot to mention that on top of everything I have full LUKS encryption, like you :)

    Yesterday I upgraded the OS to Armbian 20.11.3 and the kernel to 5.9.14-rockchip64.

    I posted the versions I have above. I wanted ZFS together with OMV, and this solution works for me; when I used all the packages from the manual build, I couldn't get the OMV ZFS plugin to work :/

    Swap is off on my system.

    I'll change some settings if I run into the problem again. For now I've finished copying my data and everything works as desired.

     

     

     

  13. To install OMV, log in to your Helios64 via SSH, then:

     

    root@helios64:~# armbian-config

    Software -> Softy -> OMV

     

    You can verify that it is installed:

     

    root@helios64:~# dpkg -l |grep openmediavault
    ii  openmediavault                       5.5.17-3                            all          openmediavault - The open network attached storage solution
    ii  openmediavault-fail2ban              5.0.5                               all          OpenMediaVault Fail2ban plugin
    ii  openmediavault-flashmemory           5.0.7                               all          folder2ram plugin for OpenMediaVault
    ii  openmediavault-keyring               1.0                                 all          GnuPG archive keys of the OpenMediaVault archive
    ii  openmediavault-luksencryption        5.0.2                               all          OpenMediaVault LUKS encryption plugin
    ii  openmediavault-omvextrasorg          5.4.2                               all          OMV-Extras.org Package Repositories for OpenMediaVault
    ii  openmediavault-zfs                   5.0.5                               arm64        OpenMediaVault plugin for ZFS
    root@helios64:~# 
    

     

  14. Hey,

    Are you using compression on your ZFS datasets?

    Yesterday I wanted to do some cleanup (previously I only had the pool, with all directories created directly in it), so I created a new dataset for each 'directory': home, docker-data, etc.

    I also set them up with ACL support and compression:

    zfs set aclinherit=passthrough data/docker-data
    zfs set acltype=posixacl data/docker-data
    zfs set xattr=sa data/docker-data
    zfs set compression=lz4 data/docker-data
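    To double-check that the properties actually took effect, a quick query like this should do (the dataset name is from my setup):

```shell
# show the ACL and compression settings on the dataset
zfs get aclinherit,acltype,xattr,compression data/docker-data

# after copying data in, check how well lz4 is doing
zfs get compressratio data/docker-data
```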


    I successfully moved 3 directories (about 300 GB each) from the main 'data' dataset to 'data/something'.

    The load was quite high and the write speed was about 40-55 MB/s (ZFS mirror, 2x 4 TB HDD).

    When I tried the same with another directory I got a kernel panic. I hit this problem twice yesterday, so in the end I disabled compression on the remaining datasets.

     

    kernel:  5.9.11 

     

    Dec 14 22:46:36 localhost kernel: [  469.592940] Modules linked in: quota_v2 quota_tree dm_crypt algif_skcipher af_alg dm_mod softdog governor_performance r8152 snd_soc_hdmi_codec pwm_fan gpio_charger leds_pwm snd_soc_rockchip_i2s snd_soc_core rockchip_vdec(C) snd_pcm_dmaengine snd_pcm hantro_vpu(C) rockchip_rga v4l2_h264 snd_timer fusb302 videobuf2_dma_contig rockchipdrm snd videobuf2_dma_sg dw_mipi_dsi v4l2_mem2mem videobuf2_vmalloc dw_hdmi soundcore videobuf2_memops tcpm analogix_dp panfrost videobuf2_v4l2 gpu_sched typec videobuf2_common drm_kms_helper cec videodev rc_core mc sg drm drm_panel_orientation_quirks cpufreq_dt gpio_beeper zfs(POE) zunicode(POE) zzstd(OE) zlua(OE) nfsd auth_rpcgss nfs_acl lockd grace zcommon(POE) znvpair(POE) zavl(POE) icp(POE) spl(OE) sunrpc lm75 ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear md_mod realtek dwmac_rk stmmac_platform stmmac mdio_xpcs adc_keys
    Dec 14 22:46:36 localhost kernel: [  469.600200] CPU: 5 PID: 5769 Comm: z_wr_iss Tainted: P         C OE     5.9.11-rockchip64 #20.11.1
    Dec 14 22:46:36 localhost kernel: [  469.600982] Hardware name: Helios64 (DT)
    Dec 14 22:46:36 localhost kernel: [  469.601328] pstate: 20000005 (nzCv daif -PAN -UAO BTYPE=--)
    Dec 14 22:46:36 localhost kernel: [  469.601942] pc : lz4_compress_zfs+0xfc/0x7d0 [zfs]
    Dec 14 22:46:36 localhost kernel: [  469.602367] lr : 0x11be8
    Dec 14 22:46:36 localhost kernel: [  469.602591] sp : ffff80001434bbd0
    Dec 14 22:46:36 localhost kernel: [  469.602882] x29: ffff80001434bbd0 x28: ffff0000af99cf80 
    Dec 14 22:46:36 localhost kernel: [  469.603349] x27: 00000000000000ff x26: ffff000064adaca8 
    Dec 14 22:46:36 localhost kernel: [  469.603815] x25: ffff800022378be8 x24: ffff800012855004 
    Dec 14 22:46:36 localhost kernel: [  469.604280] x23: ffff0000a04c4000 x22: ffff8000095c0000 
    Dec 14 22:46:36 localhost kernel: [  469.604746] x21: ffff800012855000 x20: 0000000000020000 
    Dec 14 22:46:36 localhost kernel: [  469.605212] x19: ffff800022367000 x18: 000000005b5eaa7e 
    Dec 14 22:46:36 localhost kernel: [  469.605677] x17: 00000000fffffff0 x16: ffff800022386ff8 
    Dec 14 22:46:36 localhost kernel: [  469.606143] x15: ffff800022386ffa x14: ffff800022386ffb 
    Dec 14 22:46:36 localhost kernel: [  469.606609] x13: 00000000ffffffff x12: ffffffffffff0001 
    Dec 14 22:46:36 localhost kernel: [  469.607075] x11: ffff800012867a81 x10: 000000009e3779b1 
    Dec 14 22:46:36 localhost kernel: [  469.607540] x9 : ffff80002236a2f2 x8 : ffff80002237a2f9 
    Dec 14 22:46:36 localhost kernel: [  469.608006] x7 : ffff800012871000 x6 : ffff800022387000 
    Dec 14 22:46:36 localhost kernel: [  469.608472] x5 : ffff800022386ff4 x4 : ffff80002237a301 
    Dec 14 22:46:36 localhost kernel: [  469.608937] x3 : 0000000000000239 x2 : ffff80002237a2f1 
    Dec 14 22:46:36 localhost kernel: [  469.609402] x1 : ffff800022379a3b x0 : 00000000000005b5 
    Dec 14 22:46:36 localhost kernel: [  469.609870] Call trace:
    Dec 14 22:46:36 localhost kernel: [  469.610234]  lz4_compress_zfs+0xfc/0x7d0 [zfs]
    Dec 14 22:46:36 localhost kernel: [  469.610737]  zio_compress_data+0xc0/0x13c [zfs]
    Dec 14 22:46:36 localhost kernel: [  469.611243]  zio_write_compress+0x294/0x6d0 [zfs]
    Dec 14 22:46:36 localhost kernel: [  469.611764]  zio_execute+0xa4/0x144 [zfs]
    Dec 14 22:46:36 localhost kernel: [  469.612137]  taskq_thread+0x278/0x450 [spl]
    Dec 14 22:46:36 localhost kernel: [  469.612517]  kthread+0x118/0x150
    Dec 14 22:46:36 localhost kernel: [  469.612804]  ret_from_fork+0x10/0x34
    Dec 14 22:46:36 localhost kernel: [  469.613660] ---[ end trace 34e3e71ff3a20382 ]---

     

  15. Did you try jumping to Armbian 20.11 and kernel 5.9.10?

    I've been running 5.9.10 since Friday without any issues. Before that it was 5.9.9, and I successfully copied more than 2 TB of data (ZFS mirror/RAID1).

    I also ran Docker and compiled ZFS from source a few times. The load was high, but I didn't see any issues.

     

  16. Hey,

    I think a dedicated topic for ZFS on the Helios64 is needed. Many people want it :)

    As many of us playing with dev builds for the Helios64 know, getting ZFS to work is not easy.

    I wrote a few scripts; maybe they can help some of you.

    Thanks @jbergler and @ShadowDance, I used your ideas to complete this.

    The problem comes when playing with OMV and its ZFS plugin: the plugin's dependencies replace our freshly built ZFS. I tested it; everything survived, but the ZFS userland gets downgraded to the latest version from the repo.

    for example:

     

    root@helios64:~# zfs --version

    zfs-0.8.4-2~bpo10+1

    zfs-kmod-2.0.0-rc6

    root@helios64:~# uname -a

    Linux helios64 5.9.11-rockchip64 #trunk.1 SMP PREEMPT Thu Nov 26 01:32:45 CET 2020 aarch64 GNU/Linux

    root@helios64:~#

     

    I tested it with kernel 5.9.10, and again today with a clean install on 5.9.11.
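    One standard apt mechanism that should stop the plugin's dependencies from downgrading a hand-built ZFS is putting the userland packages on hold (I have not verified this against the OMV plugin myself, so consider it a suggestion):

```shell
# pin the ZFS userland packages so apt dependency resolution cannot replace them
sudo apt-mark hold zfsutils-linux zfs-zed libzfs2linux

# list currently held packages to verify
apt-mark showhold
```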

     

    First we need Docker installed (armbian-config -> Software -> Softy -> Docker).

     

    Create a dedicated directory with a Dockerfile (we will customize the ubuntu:bionic image with the required libraries and gcc 10; for later builds we can skip this step and just customize and run the build-zfs.sh script):

    mkdir zfs-builder

    cd zfs-builder

    vi Dockerfile

     

    FROM ubuntu:bionic
    RUN apt update; apt install build-essential autoconf automake bison flex libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev python3 python3-dev python3-setuptools python3-cffi libffi-dev -y; apt install software-properties-common -y; add-apt-repository ppa:ubuntu-toolchain-r/test; apt install gcc-10 g++-10 -y; update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10; update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10

     

    Build the Docker image:

     

    docker build --tag zfs-build-ubuntu-bionic:0.1 .
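    A quick sanity check that the image built correctly and gcc 10 is the default compiler (this runs a throwaway container that is removed afterwards):

```shell
docker run --rm zfs-build-ubuntu-bionic:0.1 gcc --version
```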

     

    Create the ZFS build script:

     

    vi build-zfs.sh

     

    #!/bin/bash
    
    #define zfs version
    zfsver="zfs-2.0.0-rc6"
    #creating building directory
    mkdir /tmp/zfs-builds && cd "$_"
    rm -rf /tmp/zfs-builds/inside_zfs.sh
    apt-get download linux-headers-current-rockchip64
    git clone -b $zfsver https://github.com/openzfs/zfs.git $zfsver-$(uname -r)
    
    #create file to execute inside container
    echo "creating file to execute it inside container"
    cat > /tmp/zfs-builds/inside_zfs.sh  <<EOF
    #!/bin/bash
    cd scratch/
    dpkg -i linux-headers-current-*.deb
    zfsver="zfs-2.0.0-rc6"
    
    cd "/scratch/$zfsver-$(uname -r)"
    sh autogen.sh
    ./configure
    make -s -j$(nproc)
    make deb
    mkdir "/scratch/deb-$zfsver-$(uname -r)"
    cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
    rm -rf  "/scratch/$zfsver-$(uname -r)"
    exit
    EOF
    
    chmod +x /tmp/zfs-builds/inside_zfs.sh
    
    echo ""
    echo "####################"
    echo "starting container.."
    echo "####################"
    echo ""
    docker run --rm -it -v /tmp/zfs-builds:/scratch zfs-build-ubuntu-bionic:0.1 /bin/bash /scratch/inside_zfs.sh  
    
    # Cleanup packages (if installed).
    modprobe -r zfs zunicode zzstd zlua zcommon znvpair zavl icp spl
    apt remove --yes zfsutils-linux zfs-zed zfs-initramfs
    apt autoremove --yes
    dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/kmod-zfs-$(uname -r)*.deb
    dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/{libnvpair1,libuutil1,libzfs2,libzpool2,python3-pyzfs,zfs}_*.deb 
    
    echo ""
    echo "###################"
    echo "building complete"
    echo "###################"
    echo ""
    

     

    chmod +x build-zfs.sh

    screen -L -Logfile buildlog.txt ./build-zfs.sh
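    When the screen session finishes, the packages should end up under /tmp/zfs-builds (the paths follow the script above); a minimal check that the build and install worked might look like this:

```shell
# the script copies the resulting .deb files here
ls /tmp/zfs-builds/deb-*/

# confirm the module loads and the userland/kmod versions line up
sudo modprobe zfs
zfs version
```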

     

     

     

     

     

  17. Hey, I was able to build ZFS 2.0.0-rc6 from source on my Helios64 running Debian (inside an Ubuntu Docker container) using @jbergler's instructions.

     

    root@helios64:~# uname -a

    Linux helios64 5.9.10-rockchip64 #trunk.39 SMP PREEMPT Sun Nov 22 10:36:59 CET 2020 aarch64 GNU/Linux

    root@helios64:~#

     

    But now I'm stuck on an old glibc version:

    root@helios64:~/zfs_deb# zfs
    zfs: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /lib/aarch64-linux-gnu/libzfs.so.4)
    root@helios64:~/zfs_deb# zfs --version
    zfs: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /lib/aarch64-linux-gnu/libzfs.so.4)
    root@helios64:~/zfs_deb#
    
    root@helios64:~/zfs_deb# ldd --version
    ldd (Debian GLIBC 2.28-10) 2.28
    Copyright (C) 2018 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.  There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    Written by Roland McGrath and Ulrich Drepper.

     

    I also installed the kmod-zfs package instead of the zfs-dkms_2.0.0-0_arm64.deb package.

    When I tried to install zfs-dkms, I got these errors:

     

    Setting up libnvpair1 (2.0.0-0) ...
    Setting up libuutil1 (2.0.0-0) ...
    Setting up libzfs2-devel (2.0.0-0) ...
    Setting up libzfs2 (2.0.0-0) ...
    Setting up libzpool2 (2.0.0-0) ...
    Setting up python3-pyzfs (2.0.0-0) ...
    Setting up zfs-dkms (2.0.0-0) ...
    WARNING: /usr/lib/dkms/common.postinst does not exist.
    ERROR: DKMS version is too old and zfs was not
    built with legacy DKMS support.
    You must either rebuild zfs with legacy postinst
    support or upgrade DKMS to a more current version.
    dpkg: error processing package zfs-dkms (--install):
    installed zfs-dkms package post-installation script subprocess returned error exit status 1
    Setting up zfs-dracut (2.0.0-0) ...
    Setting up zfs-initramfs (2.0.0-0) ...
    Setting up zfs-test (2.0.0-0) ...
    Setting up zfs (2.0.0-0) ...
    Processing triggers for libc-bin (2.28-10) ...
    Processing triggers for systemd (241-7~deb10u4) ...
    Processing triggers for man-db (2.8.5-2) ...
    Errors were encountered while processing:
    zfs-dkms

     
