grek

Members
  • Posts: 19
  • Joined
  • Last visited


  1. I installed Armbian_23.5.4_Helios64_bookworm_current_6.1.36 and the 6.6.8-edge-rockchip64 kernel on the Helios64. Generally it is OK. For the first time in a long while I do not need to limit my SATA links to 3 Gbit, and I am using the ondemand CPU governor with min 408000 and max 1200000 instead of performance pinned at 1416000 (config sketch below, after the panic log). My configuration is a little specific because I am using LUKS on top of my ZFS mirror (2x 4 TB) + 1 cache SSD + 2 encrypted EXT4 HDDs. I tested NFS yesterday and successfully downloaded ~700 GB without any errors. The only thing not working as expected is that running sbc-bench.sh causes a kernel panic (http://sprunge.us/3VLJNX). I will keep testing it, but it is a good starting point.

[ 3500.389590] Internal error: Oops: 0000000086000004 [#1] PREEMPT SMP
[ 3500.390166] Modules linked in: governor_powersave governor_performance dm_crypt trusted asn1_encoder nfsv3 nfs fscache netfs rfkill lz4hc lz4 zram binfmt_misc zfs(PO) spl(O) snd_soc_hdmi_codec snd_soc_rockchip_i2s crct10dif_ce rockchip_vdec(C) hantro_vpu snd_soc_core rockchip_rga v4l2_vp9 leds_pwm snd_compress v4l2_h264 ac97_bus videobuf2_dma_contig v4l2_mem2mem snd_pcm_dmaengine videobuf2_dma_sg snd_pcm pwm_fan panfrost gpu_sched gpio_charger drm_shmem_helper videobuf2_memops videobuf2_v4l2 snd_timer rk_crypto videodev snd videobuf2_common nvmem_rockchip_efuse mc soundcore gpio_beeper ledtrig_netdev nfsd lm75 auth_rpcgss nfs_acl lockd grace sunrpc ip_tables x_tables autofs4 efivarfs raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear cdc_ncm cdc_ether usbnet r8152 rockchipdrm realtek dw_mipi_dsi dw_hdmi analogix_dp fusb302 drm_display_helper tcpm cec typec dwmac_rk stmmac_platform drm_dma_helper drm_kms_helper stmmac drm adc_keys pcs_xpcs
[ 3500.398227] CPU: 4 PID: 0 Comm: swapper/4 Tainted: P C O 6.6.8-edge-rockchip64 #1
[ 3500.399007] Hardware name: Helios64 (DT)
[ 3500.399362] pstate: 000000c5 (nzcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 3500.399988] pc : psi_group_change+0xc8/0x358
[ 3500.400388] lr : psi_task_change+0x8c/0xbc
[ 3500.400767] sp : ffff80008203bd60
[ 3500.401069] x29: ffff80008203bd70 x28: ffff800075e65000 x27: ffff800081cac8e8
[ 3500.401720] x26: ffff800081909d40 x25: 0000000000000000 x24: ffff800081907008
[ 3500.402368] x23: 0000000000000001 x22: 0000000000000000 x21: 0000000000000000
[ 3500.403015] x20: ffff0000f776ed40 x19: 0000000000000004 x18: 0000000000000000
[ 3500.403662] x17: ffff800075e65000 x16: ffff800082038000 x15: 0000000000000001
[ 3500.404309] x14: 0000000000000000 x13: ffff800081581d50 x12: ffff800081c89b68
[ 3500.404957] x11: 0000000000000040 x10: 0000000000000004 x9 : 0000032eff793e13
[ 3500.405605] x8 : ffff800081c8dd28 x7 : 0000000000000000 x6 : 000000001c036006
[ 3500.406252] x5 : 0000000000000001 x4 : 0000000000000001 x3 : 0000000000000001
[ 3500.406898] x2 : 0000000000000001 x1 : 0000000000000004 x0 : ffff0000f776ed40
[ 3500.407545] Call trace:
[ 3500.407771]  psi_group_change+0xc8/0x358
[ 3500.408131]  psi_task_change+0x8c/0xbc
[ 3500.408479]  activate_task+0xcc/0x128
[ 3500.408818]  ttwu_do_activate+0x58/0x274
[ 3500.409180]  sched_ttwu_pending+0xe4/0x1c0
[ 3500.409557]  __flush_smp_call_function_queue+0x14c/0x458
[ 3500.410042]  generic_smp_call_function_single_interrupt+0x14/0x20
[ 3500.410594]  ipi_handler+0x84/0x250
[ 3500.410920]  handle_percpu_devid_irq+0xa4/0x230
[ 3500.411337]  generic_handle_domain_irq+0x2c/0x44
[ 3500.411756]  gic_handle_irq+0x50/0x128
[ 3500.412097]  call_on_irq_stack+0x24/0x4c
[ 3500.412458]  do_interrupt_handler+0xd4/0xd8
[ 3500.412842]  el1_interrupt+0x34/0x68
[ 3500.413173]  el1h_64_irq_handler+0x18/0x24
[ 3500.413548]  el1h_64_irq+0x64/0x68
[ 3500.413860]  cpuidle_enter_state+0xc0/0x4bc
[ 3500.414246]  cpuidle_enter+0x38/0x50
[ 3500.414577]  do_idle+0x1fc/0x270
[ 3500.414878]  cpu_startup_entry+0x38/0x3c
[ 3500.415238]  secondary_start_kernel+0x128/0x148
[ 3500.415653]  __secondary_switched+0xb8/0xbc
[ 3500.416043] Code: 54fffec1 52800002 52800024 34000173 (8b224a80)
[ 3500.416592] ---[ end trace 0000000000000000 ]---
[ 3500.417009] Kernel panic - not syncing: Oops: Fatal exception in interrupt
[ 3500.417621] SMP: stopping secondary CPUs
[ 3500.418110] Kernel Offset: disabled
[ 3500.418425] CPU features: 0x1,00000208,3c020000,1000421b
[ 3500.418903] Memory Limit: none
[ 3500.419192] ---[ end Kernel panic - not syncing: Oops: Fatal exception in interrupt ]---
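For reference, the governor settings described above, in the /etc/default/cpufrequtils format used elsewhere in this thread; a minimal sketch (the exact file layout is an assumption, only the governor and the min/max values come from the post):

  # /etc/default/cpufrequtils -- ondemand governor, 408 MHz at idle, capped at 1.2 GHz
  ENABLE="true"
  GOVERNOR="ondemand"
  MIN_SPEED="408000"
  MAX_SPEED="1200000"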
  2. Finally I did a rollback to my old config: Armbian 21.08.8 with kernel 5.10.63-rockchip64. I was getting reboots during NFS copies to the Helios64; over about one month, I got 3 reboots.
  3. @ebin-dev unfortunately I can't; I faced another problem with OMV6, LUKS and ZFS 😕. Everything was good until the reboot. After the reboot, I don't know why, but the shared folders had a missing device (probably the UUID was changed...). I tried to set up cryptsetup at boot to unlock my disks and then import the ZFS pool, but the passphrase prompt never came up during the boot process (even after I recompiled the kernel with the options found there). Finally I am using a workaround: keyfiles in /boot/. But even then I sometimes faced the problem that only one disk had been unlocked (so my ZFS pool was incomplete 😕). I added ExecStartPre=/usr/bin/sleep 30 to the [Service] section of /lib/systemd/system/zfs-import-cache.service (see the sketch below) and it looks like it is working now (tried 3 times); we will see. The other bad thing is that I tried to go back to my old configuration on eMMC, but OMV5 also doesn't see the devices in the shared folders; it only works after removing all of them and setting them up from scratch. I don't know if there has been any progress with ARM performance under native ZFS encryption, but maybe I will be forced to set that up instead of LUKS with ZFS.
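The same delay can be added without editing the unit file under /lib, which a package update can overwrite. A minimal sketch using a systemd drop-in; the drop-in file name is hypothetical, the 30-second value is the one from the post:

  # Create a drop-in override for zfs-import-cache.service
  mkdir -p /etc/systemd/system/zfs-import-cache.service.d
  cat > /etc/systemd/system/zfs-import-cache.service.d/wait-for-luks.conf <<'EOF'
  [Service]
  # Give cryptsetup time to unlock all LUKS devices before the pool import runs
  ExecStartPre=/usr/bin/sleep 30
  EOF
  systemctl daemon-reload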
  4. @usefulnoise yesterday I ran the Helios64 with Armbian 22.08.0 Bullseye (Debian) and kernel 5.15.59-rockchip64. I built the image from the Armbian sources. I tried to include ZFS in the image but gave up; finally I successfully installed zfs-dkms from the backports repos:

apt -t bullseye-backports install zfs-dkms zfsutils-linux

The modules were built without any errors. I also successfully installed OMV 6 with the ZFS plugin and LUKS encryption.

root@helios64:~# zfs --version
zfs-2.1.5-1~bpo11+1
zfs-kmod-2.1.5-1~bpo11+1
root@helios64:~# uname -a
Linux helios64 5.15.59-rockchip64 #trunk SMP PREEMPT Wed Aug 10 11:29:36 UTC 2022 aarch64 GNU/Linux

Right now I have 16 hours without a reboot. I am leaving the system at defaults for the tests.

root@helios64:~# cat /boot/armbianEnv.txt
verbosity=1
bootlogo=false
overlay_prefix=rockchip
rootdev=UUID=04ed9a40-6590-4491-8d58-0d647b8cc9db
rootfstype=ext4
usbstoragequirks=0x2537:0x1066:u,0x2537:0x1068:u

root@helios64:~# cat /etc/default/cpufrequtils
ENABLE="true"
GOVERNOR="ondemand"
MAX_SPEED="0"
MIN_SPEED="0"

I am in the middle of a ZFS scrub; we will see if it completes without errors. So far I have seen one problem: yesterday I ran screen with a copy of some data over NFS, and it failed with a kernel error. Probably I will need to go back to my old cpufreq settings:

GOVERNOR=performance
MAX_SPEED=1416000
MIN_SPEED=1416000

Aug 10 17:48:21 helios64 kernel: [ 3612.621208] FS-Cache: Loaded
Aug 10 17:48:22 helios64 kernel: [ 3612.787472] FS-Cache: Netfs 'nfs' registered for caching
Aug 10 17:59:51 helios64 kernel: [ 4301.850819] Modules linked in: nfsv3 nfs fscache netfs dm_crypt dm_mod r8152 snd_soc_hdmi_codec leds_pwm gpio_charger pwm_fan panfrost snd_soc_rockchip_i2s gpu_sched snd_soc_rockchip_pcm sg snd_soc_core snd_pcm_dmaengine snd_pcm rockchip_vdec(C) hantro_vpu(C) snd_timer rockchip_iep v4l2_h264 snd rockchip_rga videobuf2_dma_contig videobuf2_vmalloc v4l2_mem2mem videobuf2_dma_sg videobuf2_memops fusb302 soundcore videobuf2_v4l2 tcpm videobuf2_common typec videodev mc gpio_beeper cpufreq_dt zfs(POE) zunicode(POE) zzstd(OE) nfsd zlua(OE) zcommon(POE) znvpair(POE) zavl(POE) auth_rpcgss icp(POE) nfs_acl lockd spl(OE) grace softdog ledtrig_netdev lm75 sunrpc ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear md_mod realtek dwmac_rk stmmac_platform stmmac pcs_xpcs adc_keys
Aug 10 17:59:51 helios64 kernel: [ 4301.857423] CPU: 5 PID: 79841 Comm: mc Tainted: P C OE 5.15.59-rockchip64 #trunk
Aug 10 17:59:51 helios64 kernel: [ 4301.858163] Hardware name: Helios64 (DT)
Aug 10 17:59:51 helios64 kernel: [ 4301.858509] pstate: 40000005 (nZcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
Aug 10 17:59:51 helios64 kernel: [ 4301.859119] pc : nfs_generic_pg_pgios+0x60/0xc0 [nfs]
Aug 10 17:59:51 helios64 kernel: [ 4301.859619] lr : nfs_generic_pg_pgios+0x5c/0xc0 [nfs]
Aug 10 17:59:51 helios64 kernel: [ 4301.860096] sp : ffff80000e2b35c0
Aug 10 17:59:51 helios64 kernel: [ 4301.860388] x29: ffff80000e2b35c0 x28: ffff00006c52d700 x27: ffff00006c52d700
Aug 10 17:59:51 helios64 kernel: [ 4301.861018] x26: ffff80000e2b3880 x25: ffff80000954be68 x24: 0000000000000400
Aug 10 17:59:51 helios64 kernel: [ 4301.861646] x23: 0000000000000000 x22: ffff00005c2f8780 x21: ffff8000019fd370
Aug 10 17:59:51 helios64 kernel: [ 4301.862273] x20: ffff80000e2b3828 x19: ffff00009ff3a580 x18: 0000000000000001
Aug 10 17:59:51 helios64 kernel: [ 4301.862901] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000002
Aug 10 17:59:51 helios64 kernel: [ 4301.863529] x14: 0000000000000080 x13: 000000000001b896 x12: ffff0000304cf9b8
Aug 10 17:59:51 helios64 kernel: [ 4301.864157] x11: ffff00009ff3a418 x10: ffff00009ff3a488 x9 : 0000000000000000
Aug 10 17:59:51 helios64 kernel: [ 4301.864784] x8 : ffff00009ff3a908 x7 : 0000000000000000 x6 : ffff00009ff3a590
Aug 10 17:59:51 helios64 kernel: [ 4301.865411] x5 : 0000000000000096 x4 : ffff00009ff3a6f8 x3 : 0000000000000800
Aug 10 17:59:51 helios64 kernel: [ 4301.866039] x2 : 50c2f8bdeb7ff200 x1 : 0000000000000000 x0 : 0000000000000000
Aug 10 17:59:51 helios64 kernel: [ 4301.866667] Call trace:
Aug 10 17:59:51 helios64 kernel: [ 4301.866883]  nfs_generic_pg_pgios+0x60/0xc0 [nfs]
Aug 10 17:59:51 helios64 kernel: [ 4301.867331]  nfs_pageio_doio+0x48/0xa0 [nfs]
Aug 10 17:59:51 helios64 kernel: [ 4301.867739]  __nfs_pageio_add_request+0x148/0x420 [nfs]
Aug 10 17:59:51 helios64 kernel: [ 4301.868230]  nfs_pageio_add_request_mirror+0x40/0x60 [nfs]
Aug 10 17:59:51 helios64 kernel: [ 4301.868743]  nfs_pageio_add_request+0x208/0x290 [nfs]
Aug 10 17:59:51 helios64 kernel: [ 4301.869220]  readpage_async_filler+0x20c/0x400 [nfs]
Aug 10 17:59:51 helios64 kernel: [ 4301.869688]  read_cache_pages+0xcc/0x198
Aug 10 17:59:51 helios64 kernel: [ 4301.870039]  nfs_readpages+0x138/0x1b8 [nfs]
Aug 10 17:59:51 helios64 kernel: [ 4301.870447]  read_pages+0x188/0x288
Aug 10 17:59:51 helios64 kernel: [ 4301.870757]  page_cache_ra_unbounded+0x148/0x248
Aug 10 17:59:51 helios64 kernel: [ 4301.871163]  do_page_cache_ra+0x40/0x50
Aug 10 17:59:51 helios64 kernel: [ 4301.871501]  ondemand_readahead+0x128/0x2c8
Aug 10 17:59:51 helios64 kernel: [ 4301.871870]  page_cache_async_ra+0xd4/0xd8
Aug 10 17:59:51 helios64 kernel: [ 4301.872232]  filemap_get_pages+0x1fc/0x568
Aug 10 17:59:51 helios64 kernel: [ 4301.872595]  filemap_read+0xac/0x340
Aug 10 17:59:51 helios64 kernel: [ 4301.872911]  generic_file_read_iter+0xf4/0x188
Aug 10 17:59:51 helios64 kernel: [ 4301.873304]  nfs_file_read+0x9c/0x128 [nfs]
Aug 10 17:59:51 helios64 kernel: [ 4301.873706]  new_sync_read+0x104/0x180
Aug 10 17:59:51 helios64 kernel: [ 4301.874040]  vfs_read+0x148/0x1e0
Aug 10 17:59:51 helios64 kernel: [ 4301.874334]  ksys_read+0x68/0xf0
Aug 10 17:59:51 helios64 kernel: [ 4301.874622]  __arm64_sys_read+0x1c/0x28
Aug 10 17:59:51 helios64 kernel: [ 4301.874960]  invoke_syscall+0x44/0x108
Aug 10 17:59:51 helios64 kernel: [ 4301.875293]  el0_svc_common.constprop.3+0x94/0xf8
Aug 10 17:59:51 helios64 kernel: [ 4301.875709]  do_el0_svc+0x24/0x88
Aug 10 17:59:51 helios64 kernel: [ 4301.876003]  el0_svc+0x20/0x50
Aug 10 17:59:51 helios64 kernel: [ 4301.876275]  el0t_64_sync_handler+0x90/0xb8
Aug 10 17:59:51 helios64 kernel: [ 4301.876645]  el0t_64_sync+0x180/0x184
Aug 10 17:59:51 helios64 kernel: [ 4301.877507] ---[ end trace 970b42649c1e42e3 ]---

Edit: after a few hours, I saw errors regarding my HDD.

Aug 11 11:32:32 helios64 kernel: [67461.218137] ata1: hard resetting link
Aug 11 11:32:32 helios64 kernel: [67461.694283] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Aug 11 11:32:32 helios64 kernel: [67461.696935] ata1.00: configured for UDMA/133
Aug 11 11:32:32 helios64 kernel: [67461.697672] sd 0:0:0:0: [sda] tag#19 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
Aug 11 11:32:32 helios64 kernel: [67461.698535] sd 0:0:0:0: [sda] tag#19 Sense Key : 0x5 [current]
Aug 11 11:32:32 helios64 kernel: [67461.699057] sd 0:0:0:0: [sda] tag#19 ASC=0x21 ASCQ=0x4
Aug 11 11:32:32 helios64 kernel: [67461.699518] sd 0:0:0:0: [sda] tag#19 CDB: opcode=0x88 88 00 00 00 00 00 5c bd e5 50 00 00 08 00 00 00
Aug 11 11:32:32 helios64 kernel: [67461.701293] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796629049344 size=1048576 flags=40080cb0
Aug 11 11:32:32 helios64 kernel: [67461.702300] sd 0:0:0:0: [sda] tag#20 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
Aug 11 11:32:32 helios64 kernel: [67461.703147] sd 0:0:0:0: [sda] tag#20 Sense Key : 0x5 [current]
Aug 11 11:32:32 helios64 kernel: [67461.703668] sd 0:0:0:0: [sda] tag#20 ASC=0x21 ASCQ=0x4
Aug 11 11:32:32 helios64 kernel: [67461.704127] sd 0:0:0:0: [sda] tag#20 CDB: opcode=0x88 88 00 00 00 00 00 5c bd ed 50 00 00 08 00 00 00
Aug 11 11:32:32 helios64 kernel: [67461.705890] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796630097920 size=1048576 flags=40080cb0
Aug 11 11:32:32 helios64 kernel: [67461.706862] sd 0:0:0:0: [sda] tag#28 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
Aug 11 11:32:32 helios64 kernel: [67461.707709] sd 0:0:0:0: [sda] tag#28 Sense Key : 0x5 [current]
Aug 11 11:32:32 helios64 kernel: [67461.708228] sd 0:0:0:0: [sda] tag#28 ASC=0x21 ASCQ=0x4
Aug 11 11:32:32 helios64 kernel: [67461.708688] sd 0:0:0:0: [sda] tag#28 CDB: opcode=0x88 88 00 00 00 00 00 5c bd f5 50 00 00 08 00 00 00
Aug 11 11:32:32 helios64 kernel: [67461.710466] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796631146496 size=1048576 flags=40080cb0
Aug 11 11:32:32 helios64 kernel: [67461.711410] ata1: EH complete

So I changed all my settings back to the old ones (from before the update):

GOVERNOR=performance
MAX_SPEED=1416000
MIN_SPEED=1416000

and

root@helios64:~# cat /boot/armbianEnv.txt
verbosity=7
bootlogo=false
console=serial
extraargs=earlyprintk ignore_loglevel libata.force=3.0

We will see what's next. I rebooted the system with those settings and the ZFS scrub resumed.
  5. Misunderstanding: I want to create a NEW ZFS pool from disks installed in an external enclosure. I'm not sure whether the disks will be correctly exposed at the OS level... or whether it will work at all...
  6. Hey guys. I have a Helios64 with 5x WD 14 TB Red Pro and a ZFS filesystem, but currently I am out of free space and I want to extend my storage pool somehow. I wonder if it is possible to buy an additional storage enclosure like the QNAP TL-D800C (JBOD, 1x USB 3.2 Gen 2 Type-C), put an additional 8x HDD in it, and create an additional raidz1 ZFS pool (rough sketch below). Do you think it will work?
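Assuming the enclosure really runs as a JBOD and exposes each disk individually, creating the second pool would look roughly like this. A sketch only: the pool name data2 and the by-id device names are placeholders, not values from this thread:

  # Find the enclosure's disks by stable identifiers
  ls -l /dev/disk/by-id/ | grep usb

  # 8-disk raidz1 pool; survives the loss of any single disk
  zpool create data2 raidz1 \
    /dev/disk/by-id/usb-DISK1 /dev/disk/by-id/usb-DISK2 \
    /dev/disk/by-id/usb-DISK3 /dev/disk/by-id/usb-DISK4 \
    /dev/disk/by-id/usb-DISK5 /dev/disk/by-id/usb-DISK6 \
    /dev/disk/by-id/usb-DISK7 /dev/disk/by-id/usb-DISK8

One caveat: all eight disks would share a single USB link, so scrub and resilver throughput will be limited by that one connection.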
  7. Thanks @ShadowDance for the tip, my scrub test completed without errors.
  8. Today I manually ran a zfs scrub on my ZFS mirror pool and I got similar errors. The funny thing is that normally the scrub is initiated by cron without any issue. Any suggestion what I should do? I ran zpool clear data sda-crypt and zpool clear data sdb-crypt on my pool (commands recapped below), and now I have an ONLINE state, but I'm a little afraid about my data... Output from before today's manual scrub; the scheduled scrub on Sun Mar 14 finished without errors:

root@helios64:/# zpool status -v data
  pool: data
 state: ONLINE
  scan: scrub repaired 0B in 06:17:38 with 0 errors on Sun Mar 14 06:41:40 2021
config:

        NAME           STATE     READ WRITE CKSUM
        data           ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            sda-crypt  ONLINE       0     0     0
            sdb-crypt  ONLINE       0     0     0
        cache
          sdc2         ONLINE       0     0     0

root@helios64:/var/log# cat /etc/cron.d/zfsutils-linux
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# TRIM the first Sunday of every month.
24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi

# Scrub the second Sunday of every month.
24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi
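For reference, the clear-and-rescrub sequence in one place; these are standard zpool commands, with the pool and vdev names taken from the post:

  zpool clear data sda-crypt   # reset the error counters on each vdev
  zpool clear data sdb-crypt
  zpool scrub data             # start a fresh scrub
  zpool status -v data         # watch progress and check for new errors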
  9. I tried to change the CPU governor to ondemand:

root@helios64:/# cat /etc/default/cpufrequtils
ENABLE="true"
GOVERNOR=ondemand
MAX_SPEED=1800000
MIN_SPEED=408000

but after 5 minutes my server rebooted. Previously I had used:

cat /etc/default/cpufrequtils
ENABLE="true"
GOVERNOR=performance
MAX_SPEED=1416000
MIN_SPEED=1416000

for the last 3 kernel updates without any issue; uptimes were mostly about 2-3 weeks without kernel panics or reboots. Unfortunately I have nothing in my logs. I have changed the verbosity to 7 and I am logging my console to a file; I will update when I get the error. I have a ZFS mirror (LUKS on top) + ZFS cache. Also, I didn't change the disk scheduler; I have bfq, but it has worked since the last upgrade without any problem. Currently I am running a zfs scrub... we will see.

[update 1]:
root@helios64:/# uptime
 11:02:40 up 2:28, 1 user, load average: 4.80, 4.81, 4.60
The scrub is still in progress, but I got a few read errors. I'm not sure if it is a problem with the HDD connector, the disk scheduler, or the CPU frequency... I didn't have them before.

[update 2]: The server has rebooted... The last task I remember was nextcloud generating thumbnails. Unfortunately nothing showed up on the console (it was connected the whole time) with:

root@helios64:~# cat /boot/armbianEnv.txt
verbosity=7
console=serial
extraargs=earlyprintk ignore_loglevel

The "failed command: READ FPDMA QUEUED" errors came from the zfs scrub a few hours earlier.
  10. Hey, I have a ZFS mirror with LUKS encryption on top + an SSD cache.

root@helios64:~# zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0B in 05:27:58 with 0 errors on Sun Feb 14 20:06:49 2021
config:

        NAME           STATE     READ WRITE CKSUM
        data           ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            sda-crypt  ONLINE       0     0     0
            sdb-crypt  ONLINE       0     0     0
        cache
          sdc2         ONLINE       0     0     0

Here you have my results... it looks like it is faster to have LUKS on top...
  11. grek

    ZFS on Helios64

    Thx @FloBaoti, I had to remove openmediavault-zfs first...

root@helios64:/# zfs --version
zfs-2.0.1-1
zfs-kmod-2.0.1-1
root@helios64:/# uname -a
Linux helios64 5.10.12-rockchip64 #21.02.1 SMP PREEMPT Wed Feb 3 20:55:02 CET 2021 aarch64 GNU/Linux

Now I have working ZFS again.
  12. grek

    ZFS on Helios64

    It looks like the latest update (Linux helios64 5.10.12-rockchip64 #21.02.1) broke the ZFS DKMS build. After the reboot the zfs kernel module is missing... Probably I will have to build it manually again.
  13. grek

    ZFS on Helios64

    @tionebrr I built it from this script yesterday without any issue. I only changed the line with make deb to make deb-kmod to create only the kmod package (that was what I needed after a kernel upgrade):

cd "/scratch/$zfsver-$(uname -r)"
sh autogen.sh
./configure
make -s -j$(nproc)
make deb-kmod
mkdir "/scratch/deb-$zfsver-$(uname -r)"
cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
rm -rf "/scratch/$zfsver-$(uname -r)"
exit
EOF

If you have trouble, please PM me and I can send you this deb package (mine is kmod-zfs-5.9.14-rockchip64_2.0.0-1_arm64.deb). I changed zfsver to the 2.0.0 git tag instead of zfsver="zfs-2.0.0-rc6". I also commented out the whole cleanup section, installed ZFS from the official repo, and am using only the kmod built manually (install sketch below); the other way, I couldn't get the OMV ZFS plugin to work :/

root@helios64:~# zfs version
zfs-0.8.5-3~bpo10+1
zfs-kmod-2.0.0-1

root@helios64:~# dpkg -l |grep zfs
ii  kmod-zfs-5.9.10-rockchip64  2.0.0-0          arm64  zfs kernel module(s) for 5.9.10-rockchip64
ii  kmod-zfs-5.9.11-rockchip64  2.0.0-1          arm64  zfs kernel module(s) for 5.9.11-rockchip64
ii  kmod-zfs-5.9.14-rockchip64  2.0.0-1          arm64  zfs kernel module(s) for 5.9.14-rockchip64
ii  libzfs2linux                0.8.5-3~bpo10+1  arm64  OpenZFS filesystem library for Linux
ii  openmediavault-zfs          5.0.5            arm64  OpenMediaVault plugin for ZFS
ii  zfs-auto-snapshot           1.2.4-2          all    ZFS automatic snapshot service
iF  zfs-dkms                    0.8.5-3~bpo10+1  all    OpenZFS filesystem kernel modules for Linux
ii  zfs-dkms-dummy              0.8.4            all    Dummy zfs-dkms package for when using built kmod
ii  zfs-zed                     0.8.5-3~bpo10+1  arm64  OpenZFS Event Daemon
ii  zfsutils-linux              0.8.5-3~bpo10+1  arm64  command-line tools to manage OpenZFS filesystems

@ShadowDance I forgot to mention that on top I have full encryption with LUKS. Like you, yesterday I upgraded the OS to Armbian 20.11.3 and the kernel to 5.9.14-rockchip64; the versions I have are posted above. I wanted to have ZFS with OMV, and this solution is somehow working for me; when I used all the packages from the manual build, I couldn't get the OMV ZFS plugin to work :/ My swap is off. I will change something in the settings if I face the problem again. Currently I have finished copying my data and everything is working as desired.
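For completeness, pairing the manually built kmod package with the repo userland looks like this; a sketch based on the package names quoted above:

  # Install the locally built kernel module package
  dpkg -i kmod-zfs-5.9.14-rockchip64_2.0.0-1_arm64.deb
  # Userland tools stay on the official (backports) packages
  apt install zfsutils-linux zfs-zed
  # Verify that the userland and kernel module versions are as expected
  zfs version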
  14. To install OMV you should log in via SSH to your Helios64, then:

root@helios64:~# armbian-config
Software -> Softy -> OMV

You can verify that it installed:

root@helios64:~# dpkg -l |grep openmediavault
ii  openmediavault                 5.5.17-3  all    openmediavault - The open network attached storage solution
ii  openmediavault-fail2ban        5.0.5     all    OpenMediaVault Fail2ban plugin
ii  openmediavault-flashmemory     5.0.7     all    folder2ram plugin for OpenMediaVault
ii  openmediavault-keyring         1.0       all    GnuPG archive keys of the OpenMediaVault archive
ii  openmediavault-luksencryption  5.0.2     all    OpenMediaVault LUKS encryption plugin
ii  openmediavault-omvextrasorg    5.4.2     all    OMV-Extras.org Package Repositories for OpenMediaVault
ii  openmediavault-zfs             5.0.5     arm64  OpenMediaVault plugin for ZFS