Everything posted by grek

  1. I installed Armbian_23.5.4_Helios64_bookworm_current_6.1.36 with the helios64 6.6.8-edge-rockchip64 kernel. Generally it is OK. For the first time in a long while I do not need to limit my SATA links to 3 Gbit, and I am using the ondemand CPU governor with min 408000 / max 1200000 instead of performance at 1416000. My configuration is a little specific because I am using LUKS on top of my ZFS mirror (2x 4TB) + 1 cache SSD + 2 EXT4 HDDs, encrypted, alongside the ZFS. I tested NFS yesterday and successfully downloaded ~700GB without any errors. The only thing not working as expected is that running sbc-bench.sh causes a kernel panic (http://sprunge.us/3VLJNX). I will keep testing it, but it is a good point to start.
[ 3500.389590] Internal error: Oops: 0000000086000004 [#1] PREEMPT SMP [ 3500.390166] Modules linked in: governor_powersave governor_performance dm_crypt trusted asn1_encoder nfsv3 nfs fscache netfs rfkill lz4hc lz4 zram binfmt_misc zfs(PO) spl(O) snd_soc_hdmi_codec snd_soc_rockchip_i2s crct10dif_ce rockchip_vdec(C) hantro_vpu snd_soc_core rockchip_rga v4l2_vp9 leds_pwm snd_compress v4l2_h264 ac97_bus videobuf2_dma_contig v4l2_mem2mem snd_pcm_dmaengine videobuf2_dma_sg snd_pcm pwm_fan panfrost gpu_sched gpio_charger drm_shmem_helper videobuf2_memops videobuf2_v4l2 snd_timer rk_crypto videodev snd videobuf2_common nvmem_rockchip_efuse mc soundcore gpio_beeper ledtrig_netdev nfsd lm75 auth_rpcgss nfs_acl lockd grace sunrpc ip_tables x_tables autofs4 efivarfs raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear cdc_ncm cdc_ether usbnet r8152 rockchipdrm realtek dw_mipi_dsi dw_hdmi analogix_dp fusb302 drm_display_helper tcpm cec typec dwmac_rk stmmac_platform drm_dma_helper drm_kms_helper stmmac drm adc_keys pcs_xpcs [ 3500.398227] CPU: 4 PID: 0 Comm: swapper/4 Tainted: P C O 6.6.8-edge-rockchip64 #1 [ 3500.399007] Hardware name: Helios64 (DT) [ 3500.399362] pstate: 000000c5 (nzcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--) [ 3500.399988] pc : psi_group_change+0xc8/0x358 [ 3500.400388] lr : psi_task_change+0x8c/0xbc [ 3500.400767] sp : ffff80008203bd60 [ 3500.401069] x29: ffff80008203bd70 x28: ffff800075e65000 x27: ffff800081cac8e8 [ 3500.401720] x26: ffff800081909d40 x25: 0000000000000000 x24: ffff800081907008 [ 3500.402368] x23: 0000000000000001 x22: 0000000000000000 x21: 0000000000000000 [ 3500.403015] x20: ffff0000f776ed40 x19: 0000000000000004 x18: 0000000000000000 [ 3500.403662] x17: ffff800075e65000 x16: ffff800082038000 x15: 0000000000000001 [ 3500.404309] x14: 0000000000000000 x13: ffff800081581d50 x12: ffff800081c89b68 [ 3500.404957] x11: 0000000000000040 x10: 0000000000000004 x9 : 0000032eff793e13 [ 3500.405605] x8 : ffff800081c8dd28 x7 : 0000000000000000 x6 : 000000001c036006 [ 3500.406252] x5 : 0000000000000001 x4 : 0000000000000001 x3 : 0000000000000001 [ 3500.406898] x2 : 0000000000000001 x1 : 0000000000000004 x0 : ffff0000f776ed40 [ 3500.407545] Call trace: [ 3500.407771] psi_group_change+0xc8/0x358 [ 3500.408131] psi_task_change+0x8c/0xbc [ 3500.408479] activate_task+0xcc/0x128 [ 3500.408818] ttwu_do_activate+0x58/0x274 [ 3500.409180] sched_ttwu_pending+0xe4/0x1c0 [ 3500.409557] __flush_smp_call_function_queue+0x14c/0x458 [ 3500.410042] generic_smp_call_function_single_interrupt+0x14/0x20 [ 3500.410594] ipi_handler+0x84/0x250 [ 3500.410920] handle_percpu_devid_irq+0xa4/0x230 [ 3500.411337] generic_handle_domain_irq+0x2c/0x44 [ 3500.411756] gic_handle_irq+0x50/0x128 [ 3500.412097] call_on_irq_stack+0x24/0x4c [ 3500.412458] do_interrupt_handler+0xd4/0xd8 [ 
3500.412842] el1_interrupt+0x34/0x68 [ 3500.413173] el1h_64_irq_handler+0x18/0x24 [ 3500.413548] el1h_64_irq+0x64/0x68 [ 3500.413860] cpuidle_enter_state+0xc0/0x4bc [ 3500.414246] cpuidle_enter+0x38/0x50 [ 3500.414577] do_idle+0x1fc/0x270 [ 3500.414878] cpu_startup_entry+0x38/0x3c [ 3500.415238] secondary_start_kernel+0x128/0x148 [ 3500.415653] __secondary_switched+0xb8/0xbc [ 3500.416043] Code: 54fffec1 52800002 52800024 34000173 (8b224a80) [ 3500.416592] ---[ end trace 0000000000000000 ]--- [ 3500.417009] Kernel panic - not syncing: Oops: Fatal exception in interrupt [ 3500.417621] SMP: stopping secondary CPUs [ 3500.418110] Kernel Offset: disabled [ 3500.418425] CPU features: 0x1,00000208,3c020000,1000421b [ 3500.418903] Memory Limit: none [ 3500.419192] ---[ end Kernel panic - not syncing: Oops: Fatal exception in interrupt ]---
  2. Finally I did a rollback to my old config: Armbian 21.08.8 with kernel 5.10.63-rockchip64. I got reboots during NFS copies to the Helios64 - about 3 reboots over roughly one month.
  3. @ebin-dev unfortunately I can't - I ran into another problem with OMV6, LUKS and ZFS 😕. Everything was good until the reboot. After the reboot, I don't know why, but the shared directories had a missing device (probably the UUID changed...). I tried to set up cryptsetup at boot, to unlock my disks and then import the ZFS pool, but the passphrase prompt never appeared during the boot process (even after I recompiled the kernel with the options found there: ). Finally I am using a workaround: keyfiles in /boot/, but even then I sometimes hit the problem that only one disk got unlocked (so my ZFS pool was incomplete 😕). I added ExecStartPre=/usr/bin/sleep 30 to the [Service] section of /lib/systemd/system/zfs-import-cache.service and it looks like it is working now (tried 3 times); we will see. A sketch of the same change as a drop-in override is below. Also, the bad part is that I tried to go back to my old configuration on eMMC, but OMV5 also doesn't see the devices in the shared directories; it only works after removing all of them and setting them up from scratch. I don't know if there has been any progress with ARM performance for native ZFS encryption, but maybe I will be forced to set that up instead of LUKS + ZFS.
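A minimal sketch of the same 30-second delay applied as a systemd drop-in override instead of editing the unit under /lib/systemd/system directly (assuming a stock systemd setup; the unit name and the sleep value are the ones from the post above), so the change survives package updates:
# open an override file for the import service
systemctl edit zfs-import-cache.service
# add these two lines in the editor that opens, then save:
[Service]
ExecStartPre=/usr/bin/sleep 30
# reload and confirm the override is picked up
systemctl daemon-reload
systemctl cat zfs-import-cache.service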
  4. @usefulnoise yesterday I ran the helios64 with Armbian 22.08.0 Bullseye (Debian) and kernel 5.15.59-rockchip64. I built the image from the Armbian sources. I tried to include ZFS in the image but gave up. Finally I successfully installed zfs-dkms from the backports repos (apt -t bullseye-backports install zfs-dkms zfsutils-linux - see the sketch at the end of this post); the modules were built without any errors. I also successfully installed OMV 6 with the ZFS plugin and LUKS encryption.
root@helios64:~# zfs --version
zfs-2.1.5-1~bpo11+1
zfs-kmod-2.1.5-1~bpo11+1
root@helios64:~# uname -a
Linux helios64 5.15.59-rockchip64 #trunk SMP PREEMPT Wed Aug 10 11:29:36 UTC 2022 aarch64 GNU/Linux
root@helios64:~#
Right now I have 16h without a reboot. I left the system at defaults for the tests.
root@helios64:~# cat /boot/armbianEnv.txt
verbosity=1
bootlogo=false
overlay_prefix=rockchip
rootdev=UUID=04ed9a40-6590-4491-8d58-0d647b8cc9db
rootfstype=ext4
usbstoragequirks=0x2537:0x1066:u,0x2537:0x1068:u
root@helios64:~#
root@helios64:~# cat /etc/default/cpufrequtils
ENABLE="true"
GOVERNOR="ondemand"
MAX_SPEED="0"
MIN_SPEED="0"
root@helios64:~#
I am in the middle of a ZFS scrub; we will see if it completes without errors. Currently I saw one problem: yesterday I ran screen while copying some data over NFS, and it failed with a kernel error. Probably I will need to go back to my old cpufreq settings: GOVERNOR=performance MAX_SPEED=1416000 MIN_SPEED=1416000
Aug 10 17:48:21 helios64 kernel: [ 3612.621208] FS-Cache: Loaded Aug 10 17:48:22 helios64 kernel: [ 3612.787472] FS-Cache: Netfs 'nfs' registered for caching Aug 10 17:59:51 helios64 kernel: [ 4301.850819] Modules linked in: nfsv3 nfs fscache netfs dm_crypt dm_mod r8152 snd_soc_hdmi_codec leds_pwm gpio_charger pwm_fan panfrost snd_soc_rockchip_i2s gpu_sched snd_soc_rockchip_pcm sg snd_soc_core snd_pcm_dmaengine snd_pcm rockchip_vdec(C) hantro_vpu(C) snd_timer rockchip_iep v4l2_h264 snd rockchip_rga videobuf2_dma_contig videobuf2_vmalloc v4l2_mem2mem videobuf2_dma_sg videobuf2_memops fusb302 soundcore videobuf2_v4l2 tcpm videobuf2_common typec videodev mc gpio_beeper cpufreq_dt zfs(POE) zunicode(POE) zzstd(OE) nfsd zlua(OE) zcommon(POE) znvpair(POE) zavl(POE) auth_rpcgss icp(POE) nfs_acl lockd spl(OE) grace softdog ledtrig_netdev lm75 sunrpc ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear md_mod realtek dwmac_rk stmmac_platform stmmac pcs_xpcs adc_keys Aug 10 17:59:51 helios64 kernel: [ 4301.857423] CPU: 5 PID: 79841 Comm: mc Tainted: P C OE 5.15.59-rockchip64 #trunk Aug 10 17:59:51 helios64 kernel: [ 4301.858163] Hardware name: Helios64 (DT) Aug 10 17:59:51 helios64 kernel: [ 4301.858509] pstate: 40000005 (nZcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) Aug 10 17:59:51 helios64 kernel: [ 4301.859119] pc : nfs_generic_pg_pgios+0x60/0xc0 [nfs] Aug 10 17:59:51 helios64 kernel: [ 4301.859619] lr : nfs_generic_pg_pgios+0x5c/0xc0 [nfs] Aug 10 17:59:51 helios64 kernel: [ 4301.860096] sp : ffff80000e2b35c0 Aug 10 17:59:51 helios64 kernel: [ 4301.860388] x29: ffff80000e2b35c0 x28: ffff00006c52d700 x27: ffff00006c52d700 Aug 10 17:59:51 helios64 kernel: [ 4301.861018] x26: ffff80000e2b3880 x25: ffff80000954be68 x24: 0000000000000400 Aug 10 17:59:51 helios64 kernel: [ 4301.861646] x23: 0000000000000000 x22: ffff00005c2f8780 x21: ffff8000019fd370 Aug 10 17:59:51 helios64 kernel: [ 4301.862273] x20: ffff80000e2b3828 x19: ffff00009ff3a580 x18: 0000000000000001 Aug 10 17:59:51 helios64 kernel: [ 4301.862901] x17: 0000000000000000 x16: 0000000000000000 x15: 
0000000000000002 Aug 10 17:59:51 helios64 kernel: [ 4301.863529] x14: 0000000000000080 x13: 000000000001b896 x12: ffff0000304cf9b8 Aug 10 17:59:51 helios64 kernel: [ 4301.864157] x11: ffff00009ff3a418 x10: ffff00009ff3a488 x9 : 0000000000000000 Aug 10 17:59:51 helios64 kernel: [ 4301.864784] x8 : ffff00009ff3a908 x7 : 0000000000000000 x6 : ffff00009ff3a590 Aug 10 17:59:51 helios64 kernel: [ 4301.865411] x5 : 0000000000000096 x4 : ffff00009ff3a6f8 x3 : 0000000000000800 Aug 10 17:59:51 helios64 kernel: [ 4301.866039] x2 : 50c2f8bdeb7ff200 x1 : 0000000000000000 x0 : 0000000000000000 Aug 10 17:59:51 helios64 kernel: [ 4301.866667] Call trace: Aug 10 17:59:51 helios64 kernel: [ 4301.866883] nfs_generic_pg_pgios+0x60/0xc0 [nfs] Aug 10 17:59:51 helios64 kernel: [ 4301.867331] nfs_pageio_doio+0x48/0xa0 [nfs] Aug 10 17:59:51 helios64 kernel: [ 4301.867739] __nfs_pageio_add_request+0x148/0x420 [nfs] Aug 10 17:59:51 helios64 kernel: [ 4301.868230] nfs_pageio_add_request_mirror+0x40/0x60 [nfs] Aug 10 17:59:51 helios64 kernel: [ 4301.868743] nfs_pageio_add_request+0x208/0x290 [nfs] Aug 10 17:59:51 helios64 kernel: [ 4301.869220] readpage_async_filler+0x20c/0x400 [nfs] Aug 10 17:59:51 helios64 kernel: [ 4301.869688] read_cache_pages+0xcc/0x198 Aug 10 17:59:51 helios64 kernel: [ 4301.870039] nfs_readpages+0x138/0x1b8 [nfs] Aug 10 17:59:51 helios64 kernel: [ 4301.870447] read_pages+0x188/0x288 Aug 10 17:59:51 helios64 kernel: [ 4301.870757] page_cache_ra_unbounded+0x148/0x248 Aug 10 17:59:51 helios64 kernel: [ 4301.871163] do_page_cache_ra+0x40/0x50 Aug 10 17:59:51 helios64 kernel: [ 4301.871501] ondemand_readahead+0x128/0x2c8 Aug 10 17:59:51 helios64 kernel: [ 4301.871870] page_cache_async_ra+0xd4/0xd8 Aug 10 17:59:51 helios64 kernel: [ 4301.872232] filemap_get_pages+0x1fc/0x568 Aug 10 17:59:51 helios64 kernel: [ 4301.872595] filemap_read+0xac/0x340 Aug 10 17:59:51 helios64 kernel: [ 4301.872911] generic_file_read_iter+0xf4/0x188 Aug 10 17:59:51 helios64 kernel: [ 4301.873304] nfs_file_read+0x9c/0x128 [nfs] Aug 10 17:59:51 helios64 kernel: [ 4301.873706] new_sync_read+0x104/0x180 Aug 10 17:59:51 helios64 kernel: [ 4301.874040] vfs_read+0x148/0x1e0 Aug 10 17:59:51 helios64 kernel: [ 4301.874334] ksys_read+0x68/0xf0 Aug 10 17:59:51 helios64 kernel: [ 4301.874622] __arm64_sys_read+0x1c/0x28 Aug 10 17:59:51 helios64 kernel: [ 4301.874960] invoke_syscall+0x44/0x108 Aug 10 17:59:51 helios64 kernel: [ 4301.875293] el0_svc_common.constprop.3+0x94/0xf8 Aug 10 17:59:51 helios64 kernel: [ 4301.875709] do_el0_svc+0x24/0x88 Aug 10 17:59:51 helios64 kernel: [ 4301.876003] el0_svc+0x20/0x50 Aug 10 17:59:51 helios64 kernel: [ 4301.876275] el0t_64_sync_handler+0x90/0xb8 Aug 10 17:59:51 helios64 kernel: [ 4301.876645] el0t_64_sync+0x180/0x184 Aug 10 17:59:51 helios64 kernel: [ 4301.877507] ---[ end trace 970b42649c1e42e3 ]--- Edit. After few hours , I saw the errors regarding my HDD. 
Aug 11 11:32:32 helios64 kernel: [67461.218137] ata1: hard resetting link Aug 11 11:32:32 helios64 kernel: [67461.694283] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Aug 11 11:32:32 helios64 kernel: [67461.696935] ata1.00: configured for UDMA/133 Aug 11 11:32:32 helios64 kernel: [67461.697672] sd 0:0:0:0: [sda] tag#19 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s Aug 11 11:32:32 helios64 kernel: [67461.698535] sd 0:0:0:0: [sda] tag#19 Sense Key : 0x5 [current] Aug 11 11:32:32 helios64 kernel: [67461.699057] sd 0:0:0:0: [sda] tag#19 ASC=0x21 ASCQ=0x4 Aug 11 11:32:32 helios64 kernel: [67461.699518] sd 0:0:0:0: [sda] tag#19 CDB: opcode=0x88 88 00 00 00 00 00 5c bd e5 50 00 00 08 00 00 00 Aug 11 11:32:32 helios64 kernel: [67461.701293] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796629049344 size=1048576 flags=40080cb0 Aug 11 11:32:32 helios64 kernel: [67461.702300] sd 0:0:0:0: [sda] tag#20 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s Aug 11 11:32:32 helios64 kernel: [67461.703147] sd 0:0:0:0: [sda] tag#20 Sense Key : 0x5 [current] Aug 11 11:32:32 helios64 kernel: [67461.703668] sd 0:0:0:0: [sda] tag#20 ASC=0x21 ASCQ=0x4 Aug 11 11:32:32 helios64 kernel: [67461.704127] sd 0:0:0:0: [sda] tag#20 CDB: opcode=0x88 88 00 00 00 00 00 5c bd ed 50 00 00 08 00 00 00 Aug 11 11:32:32 helios64 kernel: [67461.705890] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796630097920 size=1048576 flags=40080cb0 Aug 11 11:32:32 helios64 kernel: [67461.706862] sd 0:0:0:0: [sda] tag#28 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s Aug 11 11:32:32 helios64 kernel: [67461.707709] sd 0:0:0:0: [sda] tag#28 Sense Key : 0x5 [current] Aug 11 11:32:32 helios64 kernel: [67461.708228] sd 0:0:0:0: [sda] tag#28 ASC=0x21 ASCQ=0x4 Aug 11 11:32:32 helios64 kernel: [67461.708688] sd 0:0:0:0: [sda] tag#28 CDB: opcode=0x88 88 00 00 00 00 00 5c bd f5 50 00 00 08 00 00 00 Aug 11 11:32:32 helios64 kernel: [67461.710466] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796631146496 size=1048576 flags=40080cb0 Aug 11 11:32:32 helios64 kernel: [67461.711410] ata1: EH complete
So I changed all my settings back to the old ones (from before the update): GOVERNOR=performance MAX_SPEED=1416000 MIN_SPEED=1416000, and:
root@helios64:~# cat /boot/armbianEnv.txt
verbosity=7
bootlogo=false
console=serial
extraargs=earlyprintk ignore_loglevel libata.force=3.0
We will see what comes next. I rebooted the system with those settings and the ZFS scrub resumed.
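For anyone reproducing the backports install mentioned at the start of this post, a minimal sketch (assuming a stock Debian Bullseye apt setup; ZFS lives in the contrib section, and the Armbian headers package name is the one used elsewhere in this thread):
# enable bullseye-backports if it is not already configured
echo "deb http://deb.debian.org/debian bullseye-backports main contrib" > /etc/apt/sources.list.d/bullseye-backports.list
apt update
# headers for the running kernel are needed so DKMS can build the module
apt install linux-headers-current-rockchip64
# pull ZFS from backports and verify
apt -t bullseye-backports install zfs-dkms zfsutils-linux
zfs --version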
  5. Misunderstanding - I want to create a NEW ZFS pool from disks installed in the external enclosure. I'm not sure whether the disks will be correctly exposed at the OS level... and whether it will work at all...
  6. Hey guys. I have a helios64 with 5x WD 14TB Red Pro and a ZFS filesystem, but currently I'm out of free space and want to extend my storage pool somehow. I'm wondering if it's possible to buy an additional storage enclosure, like the QNAP TL-D800C (JBOD, 1x USB 3.2 Gen 2 Type-C), put an additional 8x HDD in it, and create an additional raidz1 ZFS pool from them. Do you think it will work? (A rough sketch of the pool creation is below.)
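It should work at the ZFS level as long as the enclosure really exposes all eight disks as separate block devices over USB, which is worth verifying first. A rough sketch of creating a separate raidz1 pool from them, with hypothetical by-id device names (the pool name "data2" is also just an example):
# confirm every disk in the enclosure shows up individually
lsblk -o NAME,SIZE,MODEL,TRAN
ls -l /dev/disk/by-id/ | grep usb
# build a new raidz1 pool from the eight disks (device names below are placeholders)
zpool create data2 raidz1 \
  /dev/disk/by-id/usb-ENCLOSURE_DISK1 \
  /dev/disk/by-id/usb-ENCLOSURE_DISK2 \
  /dev/disk/by-id/usb-ENCLOSURE_DISK3 \
  /dev/disk/by-id/usb-ENCLOSURE_DISK4 \
  /dev/disk/by-id/usb-ENCLOSURE_DISK5 \
  /dev/disk/by-id/usb-ENCLOSURE_DISK6 \
  /dev/disk/by-id/usb-ENCLOSURE_DISK7 \
  /dev/disk/by-id/usb-ENCLOSURE_DISK8
zpool status data2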
  7. Thanks @ShadowDance for the tip, my scrub test completed without errors.
  8. Today I manually ran a zfs scrub on my ZFS mirror pool and got similar errors. The funny thing is that normally the scrub is initiated by cron without any issue. Any suggestion what I should do? I ran zpool clear data sda-crypt and zpool clear data sdb-crypt and now the state is ONLINE, but I'm a little afraid about my data... Output from before today's manual scrub - the scheduled scrub on Sun Mar 14 finished without errors:
root@helios64:/# zpool status -v data
  pool: data
 state: ONLINE
  scan: scrub repaired 0B in 06:17:38 with 0 errors on Sun Mar 14 06:41:40 2021
config:
	NAME           STATE     READ WRITE CKSUM
	data           ONLINE       0     0     0
	  mirror-0     ONLINE       0     0     0
	    sda-crypt  ONLINE       0     0     0
	    sdb-crypt  ONLINE       0     0     0
	cache
	  sdc2         ONLINE       0     0     0
root@helios64:/var/log# cat /etc/cron.d/zfsutils-linux
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# TRIM the first Sunday of every month.
24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi
# Scrub the second Sunday of every month.
24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi
root@helios64:/var/log#
  9. I tried to change the CPU governor to ondemand (see the quick check sketch below):
root@helios64:/# cat /etc/default/cpufrequtils
ENABLE="true"
GOVERNOR=ondemand
MAX_SPEED=1800000
MIN_SPEED=408000
but after 5 minutes my server rebooted. Previously I used:
cat /etc/default/cpufrequtils
ENABLE="true"
GOVERNOR=performance
MAX_SPEED=1416000
MIN_SPEED=1416000
for the last 3 kernel updates without any issue; uptimes were mostly about 2-3 weeks without kernel panics or reboots. Unfortunately I have nothing in my logs. I have changed verbosity to 7 and I am logging the console to a file; I will update when I get the error. I have a ZFS mirror (LUKS on top) + a ZFS cache. Also, I didn't change the disk scheduler - I have bfq, but it has worked since the last upgrade without any problem. Currently I am running a zfs scrub... we will see.
[update 1]:
root@helios64:/# uptime
 11:02:40 up 2:28, 1 user, load average: 4.80, 4.81, 4.60
The scrub is still in progress, but I got a few read errors. I'm not sure if it is a problem with the HDD connector, the HDD scheduler, or the CPU frequency... I didn't have this before.
[update 2]: The server rebooted..... The last 'tasks' I remember were thumbnail generation by Nextcloud. Unfortunately there is nothing in the console (it was connected the whole time) with:
root@helios64:~# cat /boot/armbianEnv.txt
verbosity=7
console=serial
extraargs=earlyprintk ignore_loglevel
The "failed command: READ FPDMA QUEUED" errors come from the zfs scrub a few hours earlier.
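A quick sketch of verifying at runtime that the cpufrequtils settings were actually applied, using the standard cpufreq sysfs paths (on the Helios64's RK3399 the little cores are cpu0-3 and the big cores cpu4-5, so both policies are worth checking):
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
cat /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq /sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq
# current frequencies of both clusters
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq /sys/devices/system/cpu/cpu4/cpufreq/scaling_cur_freq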
  10. Hey, I have a ZFS mirror with LUKS encryption on top + an SSD cache:
root@helios64:~# zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0B in 05:27:58 with 0 errors on Sun Feb 14 20:06:49 2021
config:
	NAME           STATE     READ WRITE CKSUM
	data           ONLINE       0     0     0
	  mirror-0     ONLINE       0     0     0
	    sda-crypt  ONLINE       0     0     0
	    sdb-crypt  ONLINE       0     0     0
	cache
	  sdc2         ONLINE       0     0     0
Here you have my results... it looks like it is faster to have LUKS on top.
  11. ZFS on Helios64

    Thx @FloBaoti, I had to remove openmediavault-zfs first....
root@helios64:/# zfs --version
zfs-2.0.1-1
zfs-kmod-2.0.1-1
root@helios64:/# uname -a
Linux helios64 5.10.12-rockchip64 #21.02.1 SMP PREEMPT Wed Feb 3 20:55:02 CET 2021 aarch64 GNU/Linux
root@helios64:/#
    Now I have working ZFS again.
  12. ZFS on Helios64

    It looks like the latest update (Linux helios64 5.10.12-rockchip64 #21.02.1) broke the ZFS DKMS build. After the reboot the zfs kernel module is missing... Probably I have to build it manually again (a quick checklist is sketched below).
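Not certain this is exactly what broke here, but a minimal sketch of what I'd check after a kernel update when the zfs module disappears (assuming the module comes from zfs-dkms and headers are available for the new kernel):
# which kernels DKMS has built zfs for
dkms status
# make sure headers for the running kernel are installed
apt install linux-headers-current-rockchip64
# rebuild/install DKMS modules for the running kernel
dkms autoinstall
# or simply: apt install --reinstall zfs-dkms
modprobe zfs && zfs version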
  13. ZFS on Helios64

    @tionebrr I built it from this script yesterday without any issue. I only changed the line with make deb to make deb-kmod to create only the kmod package (I needed that after a kernel upgrade):
cd "/scratch/$zfsver-$(uname -r)"
sh autogen.sh
./configure
make -s -j$(nproc)
make deb-kmod
mkdir "/scratch/deb-$zfsver-$(uname -r)"
cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
rm -rf "/scratch/$zfsver-$(uname -r)"
exit
EOF
    If you have trouble, please PM me and I can send you this deb package (mine is kmod-zfs-5.9.14-rockchip64_2.0.0-1_arm64.deb). I changed the zfsver git tag to 2.0.0 instead of zfsver="zfs-2.0.0-rc6". I also commented out the whole cleanup section, installed ZFS from the official repo, and am using only the kmod installed manually; the other way I had a problem with OMV :/
root@helios64:~# zfs version
zfs-0.8.5-3~bpo10+1
zfs-kmod-2.0.0-1
root@helios64:~#
root@helios64:~# dpkg -l |grep zfs
ii kmod-zfs-5.9.10-rockchip64  2.0.0-0          arm64  zfs kernel module(s) for 5.9.10-rockchip64
ii kmod-zfs-5.9.11-rockchip64  2.0.0-1          arm64  zfs kernel module(s) for 5.9.11-rockchip64
ii kmod-zfs-5.9.14-rockchip64  2.0.0-1          arm64  zfs kernel module(s) for 5.9.14-rockchip64
ii libzfs2linux                0.8.5-3~bpo10+1  arm64  OpenZFS filesystem library for Linux
ii openmediavault-zfs          5.0.5            arm64  OpenMediaVault plugin for ZFS
ii zfs-auto-snapshot           1.2.4-2          all    ZFS automatic snapshot service
iF zfs-dkms                    0.8.5-3~bpo10+1  all    OpenZFS filesystem kernel modules for Linux
ii zfs-dkms-dummy              0.8.4            all    Dummy zfs-dkms package for when using built kmod
ii zfs-zed                     0.8.5-3~bpo10+1  arm64  OpenZFS Event Daemon
ii zfsutils-linux              0.8.5-3~bpo10+1  arm64  command-line tools to manage OpenZFS filesystems
root@helios64:~#
    @ShadowDance I forgot to mention that on top I have full encryption with LUKS. Like you, yesterday I upgraded the OS to Armbian 20.11.3 and the kernel to Linux helios64 5.9.14-rockchip64; I posted which versions I have. I wanted to have ZFS together with OMV and this solution is somehow working for me - if I used all the packages from the manual build, I couldn't get the OMV ZFS plugin to work :/ My swap is off. I will change something in the settings if I face the problem again. Currently I have finished copying my data and everything is working as desired.
  14. To install OMV you should log in via SSH to your Helios64, then:
root@helios64:~# armbian-config
Software -> Softy -> OMV
You can verify that it installed:
root@helios64:~# dpkg -l |grep openmediavault
ii openmediavault                 5.5.17-3  all    openmediavault - The open network attached storage solution
ii openmediavault-fail2ban        5.0.5     all    OpenMediaVault Fail2ban plugin
ii openmediavault-flashmemory     5.0.7     all    folder2ram plugin for OpenMediaVault
ii openmediavault-keyring         1.0       all    GnuPG archive keys of the OpenMediaVault archive
ii openmediavault-luksencryption  5.0.2     all    OpenMediaVault LUKS encryption plugin
ii openmediavault-omvextrasorg    5.4.2     all    OMV-Extras.org Package Repositories for OpenMediaVault
ii openmediavault-zfs             5.0.5     arm64  OpenMediaVault plugin for ZFS
root@helios64:~#
  15. ZFS on Helios64

    Hey, do you use compression on your ZFS datasets? Yesterday I wanted to do some cleanup (previously I had only the pool created and all directories lived directly in it), so I created a new dataset for each 'directory', like home, docker-data etc. I also set them up with ACL support and compression (the same properties can also be set at creation time - see the sketch after this post):
zfs set aclinherit=passthrough data/docker-data
zfs set acltype=posixacl data/docker-data
zfs set xattr=sa data/docker-data
zfs set compression=lz4 data/docker-data
    I successfully moved 3 directories (about 300 GB each) from the main 'data' dataset to 'data'/something. The load was quite big and the write speed was about 40-55 MB/s (ZFS mirror - 2x 4TB HDD). When I tried with another directory (to a new dataset) I got a kernel panic. I hit this problem twice yesterday, so finally I disabled compression on the remaining datasets. Kernel: 5.9.11
Dec 14 22:46:36 localhost kernel: [ 469.592940] Modules linked in: quota_v2 quota_tree dm_crypt algif_skcipher af_alg dm_mod softdog governor_performance r8152 snd_soc_hdmi_codec pwm_fan gpio_charger leds_pwm snd_soc_rockchip_i2s snd_soc_core rockchip_vdec(C) snd_pcm_dmaengine snd_pcm hantro_vpu(C) rockchip_rga v4l2_h264 snd_timer fusb302 videobuf2_dma_contig rockchipdrm snd videobuf2_dma_sg dw_mipi_dsi v4l2_mem2mem videobuf2_vmalloc dw_hdmi soundcore videobuf2_memops tcpm analogix_dp panfrost videobuf2_v4l2 gpu_sched typec videobuf2_common drm_kms_helper cec videodev rc_core mc sg drm drm_panel_orientation_quirks cpufreq_dt gpio_beeper zfs(POE) zunicode(POE) zzstd(OE) zlua(OE) nfsd auth_rpcgss nfs_acl lockd grace zcommon(POE) znvpair(POE) zavl(POE) icp(POE) spl(OE) sunrpc lm75 ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear md_mod realtek dwmac_rk stmmac_platform stmmac mdio_xpcs adc_keys Dec 14 22:46:36 localhost kernel: [ 469.600200] CPU: 5 PID: 5769 Comm: z_wr_iss Tainted: P C OE 5.9.11-rockchip64 #20.11.1 Dec 14 22:46:36 localhost kernel: [ 469.600982] Hardware name: Helios64 (DT) Dec 14 22:46:36 localhost kernel: [ 469.601328] pstate: 20000005 (nzCv daif -PAN -UAO BTYPE=--) Dec 14 22:46:36 localhost kernel: [ 469.601942] pc : lz4_compress_zfs+0xfc/0x7d0 [zfs] Dec 14 22:46:36 localhost kernel: [ 469.602367] lr : 0x11be8 Dec 14 22:46:36 localhost kernel: [ 469.602591] sp : ffff80001434bbd0 Dec 14 22:46:36 localhost kernel: [ 469.602882] x29: ffff80001434bbd0 x28: ffff0000af99cf80 Dec 14 22:46:36 localhost kernel: [ 469.603349] x27: 00000000000000ff x26: ffff000064adaca8 Dec 14 22:46:36 localhost kernel: [ 469.603815] x25: ffff800022378be8 x24: ffff800012855004 Dec 14 22:46:36 localhost kernel: [ 469.604280] x23: ffff0000a04c4000 x22: ffff8000095c0000 Dec 14 22:46:36 localhost kernel: [ 469.604746] x21: ffff800012855000 x20: 0000000000020000 Dec 14 22:46:36 localhost kernel: [ 469.605212] x19: ffff800022367000 x18: 000000005b5eaa7e Dec 14 22:46:36 localhost kernel: [ 469.605677] x17: 00000000fffffff0 x16: ffff800022386ff8 Dec 14 22:46:36 localhost kernel: [ 469.606143] x15: ffff800022386ffa x14: ffff800022386ffb Dec 14 22:46:36 localhost kernel: [ 469.606609] x13: 00000000ffffffff x12: ffffffffffff0001 Dec 14 22:46:36 localhost kernel: [ 469.607075] x11: ffff800012867a81 x10: 000000009e3779b1 Dec 14 22:46:36 localhost kernel: [ 469.607540] x9 : ffff80002236a2f2 x8 : ffff80002237a2f9 Dec 14 22:46:36 localhost kernel: [ 469.608006] x7 : ffff800012871000 x6 : ffff800022387000 Dec 14 22:46:36 localhost kernel: [ 469.608472] x5 : ffff800022386ff4 x4 : ffff80002237a301 Dec 14 22:46:36 localhost 
kernel: [ 469.608937] x3 : 0000000000000239 x2 : ffff80002237a2f1 Dec 14 22:46:36 localhost kernel: [ 469.609402] x1 : ffff800022379a3b x0 : 00000000000005b5 Dec 14 22:46:36 localhost kernel: [ 469.609870] Call trace: Dec 14 22:46:36 localhost kernel: [ 469.610234] lz4_compress_zfs+0xfc/0x7d0 [zfs] Dec 14 22:46:36 localhost kernel: [ 469.610737] zio_compress_data+0xc0/0x13c [zfs] Dec 14 22:46:36 localhost kernel: [ 469.611243] zio_write_compress+0x294/0x6d0 [zfs] Dec 14 22:46:36 localhost kernel: [ 469.611764] zio_execute+0xa4/0x144 [zfs] Dec 14 22:46:36 localhost kernel: [ 469.612137] taskq_thread+0x278/0x450 [spl] Dec 14 22:46:36 localhost kernel: [ 469.612517] kthread+0x118/0x150 Dec 14 22:46:36 localhost kernel: [ 469.612804] ret_from_fork+0x10/0x34 Dec 14 22:46:36 localhost kernel: [ 469.613660] ---[ end trace 34e3e71ff3a20382 ]---
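For comparison, a small sketch of setting those same properties at dataset creation time instead of with zfs set afterwards (same property names as in the post; the dataset name is just the example one):
# create the dataset with ACLs, xattr=sa and lz4 compression in one step
zfs create -o compression=lz4 -o acltype=posixacl -o xattr=sa -o aclinherit=passthrough data/docker-data
# confirm what is actually in effect
zfs get compression,acltype,xattr,aclinherit data/docker-data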
  16. Did you try jumping to Armbian 20.11 and kernel 5.9.10? I have been using 5.9.10 since Friday without any issues. Before that 5.9.9(?), and I successfully copied more than 2 TB of data (ZFS - raid1). I also ran docker and compiled ZFS from sources a few times. The load was big but I don't see any issues.
  17. ZFS on Helios64

    Hey, I think a dedicated topic for ZFS on Helios64 is needed - many people want to have it. As we know, for many of us playing with dev builds for the Helios it is not easy to work with ZFS, so I wrote a few scripts; maybe they can help someone in some way. Thanks @jbergler and @ShadowDance, I used your idea to complete it. The problem with playing with OMV and the ZFS plugin is that its dependencies remove our latest version of ZFS. I tested it: everything survived, but ZFS is downgraded to the latest version from the repo, for example:
root@helios64:~# zfs --version
zfs-0.8.4-2~bpo10+1
zfs-kmod-2.0.0-rc6
root@helios64:~# uname -a
Linux helios64 5.9.11-rockchip64 #trunk.1 SMP PREEMPT Thu Nov 26 01:32:45 CET 2020 aarch64 GNU/Linux
root@helios64:~#
    I tested it with kernel 5.9.10 and with today's clean install with 5.9.11.
    First we need to have docker installed (armbian-config -> Software -> Softy -> docker).
    Create a dedicated directory with a Dockerfile (we will customize the ubuntu:bionic image with the required libraries and gcc 10; in later builds we can skip this step and just customize and run the build-zfs.sh script):
mkdir zfs-builder
cd zfs-builder
vi Dockerfile
FROM ubuntu:bionic
RUN apt update; apt install build-essential autoconf automake bison flex libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev python3 python3-dev python3-setuptools python3-cffi libffi-dev -y; apt install software-properties-common -y; add-apt-repository ppa:ubuntu-toolchain-r/test; apt install gcc-10 g++-10 -y; update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10; update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10
    Build the docker image for building purposes:
docker build --tag zfs-build-ubuntu-bionic:0.1 .
    Create the script for the ZFS build:
vi build-zfs.sh
#!/bin/bash
#define zfs version
zfsver="zfs-2.0.0-rc6"
#creating building directory
mkdir /tmp/zfs-builds && cd "$_"
rm -rf /tmp/zfs-builds/inside_zfs.sh
apt-get download linux-headers-current-rockchip64
git clone -b $zfsver https://github.com/openzfs/zfs.git $zfsver-$(uname -r)
#create file to execute inside container
echo "creating file to execute it inside container"
cat > /tmp/zfs-builds/inside_zfs.sh <<EOF
#!/bin/bash
cd scratch/
dpkg -i linux-headers-current-*.deb
zfsver="zfs-2.0.0-rc6"
cd "/scratch/$zfsver-$(uname -r)"
sh autogen.sh
./configure
make -s -j$(nproc)
make deb
mkdir "/scratch/deb-$zfsver-$(uname -r)"
cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
rm -rf "/scratch/$zfsver-$(uname -r)"
exit
EOF
chmod +x /tmp/zfs-builds/inside_zfs.sh
echo ""
echo "####################"
echo "starting container.."
echo "####################"
echo ""
docker run --rm -it -v /tmp/zfs-builds:/scratch zfs-build-ubuntu-bionic:0.1 /bin/bash /scratch/inside_zfs.sh
# Cleanup packages (if installed).
modprobe -r zfs zunicode zzstd zlua zcommon znvpair zavl icp spl
apt remove --yes zfsutils-linux zfs-zed zfs-initramfs
apt autoremove --yes
dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/kmod-zfs-$(uname -r)*.deb
dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/{libnvpair1,libuutil1,libzfs2,libzpool2,python3-pyzfs,zfs}_*.deb
echo ""
echo "###################"
echo "building complete"
echo "###################"
echo ""
    Then make it executable and run it, logging the output with screen (a short verification sketch follows below):
chmod +x build-zfs.sh
screen -L -Logfile buildlog.txt ./build-zfs.sh
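Once the script finishes, a short sketch of verifying the freshly installed build (paths follow the /tmp/zfs-builds layout used by the script above):
# the .deb packages end up under /tmp/zfs-builds/deb-<zfsver>-<kernel>/
ls /tmp/zfs-builds/
# load the module and compare userland vs kernel module versions
modprobe zfs
zfs version
cat /sys/module/zfs/version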
  18. Hey, I was able to build ZFS zfs-2.0.0-rc6 from sources using @jbergler's instructions on my helios64, on Debian (inside a Docker Ubuntu container).
root@helios64:~# uname -a
Linux helios64 5.9.10-rockchip64 #trunk.39 SMP PREEMPT Sun Nov 22 10:36:59 CET 2020 aarch64 GNU/Linux
root@helios64:~#
But now I'm stuck with an old glibc version:
root@helios64:~/zfs_deb# zfs
zfs: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /lib/aarch64-linux-gnu/libzfs.so.4)
root@helios64:~/zfs_deb# zfs --version
zfs: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /lib/aarch64-linux-gnu/libzfs.so.4)
root@helios64:~/zfs_deb#
root@helios64:~/zfs_deb# ldd --version
ldd (Debian GLIBC 2.28-10) 2.28
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
I also installed kmod-zfs instead of the zfs-dkms_2.0.0-0_arm64.deb package. When I tried to install zfs-dkms I got errors:
Setting up libnvpair1 (2.0.0-0) ...
Setting up libuutil1 (2.0.0-0) ...
Setting up libzfs2-devel (2.0.0-0) ...
Setting up libzfs2 (2.0.0-0) ...
Setting up libzpool2 (2.0.0-0) ...
Setting up python3-pyzfs (2.0.0-0) ...
Setting up zfs-dkms (2.0.0-0) ...
WARNING: /usr/lib/dkms/common.postinst does not exist.
ERROR: DKMS version is too old and zfs was not built with legacy DKMS support.
You must either rebuild zfs with legacy postinst support or upgrade DKMS to a more current version.
dpkg: error processing package zfs-dkms (--install):
 installed zfs-dkms package post-installation script subprocess returned error exit status 1
Setting up zfs-dracut (2.0.0-0) ...
Setting up zfs-initramfs (2.0.0-0) ...
Setting up zfs-test (2.0.0-0) ...
Setting up zfs (2.0.0-0) ...
Processing triggers for libc-bin (2.28-10) ...
Processing triggers for systemd (241-7~deb10u4) ...
Processing triggers for man-db (2.8.5-2) ...
Errors were encountered while processing:
 zfs-dkms