
grek

Members
  • Posts

    18
  • Joined

  • Last visited


  1. grek

    5.15 broke zfs-dkms

     Finally I did a rollback to my old config: Armbian 21.08.8 with kernel 5.10.63-rockchip64. I got reboots during NFS copies to the Helios, so over about 1 month I got 3 reboots.
  2. grek

    5.15 broke zfs-dkms

     @ebin-dev unfortunately I can't. I hit another problem with OMV6, LUKS and ZFS 😕. Everything was fine until a reboot. After the reboot, I don't know why, but the shared directories had a missing device (probably a UUID changed...). I tried to set up cryptsetup at boot to unlock my disks and then import the ZFS pool, but the passphrase prompt does not come up during the boot process (even after I recompiled the kernel with the options found there). Finally I'm using a workaround: keyfiles in /boot/, but even then I sometimes hit the problem that only one disk gets unlocked (so my ZFS pool was incomplete 😕). I added ExecStartPre=/usr/bin/sleep 30 to [Service] in /lib/systemd/system/zfs-import-cache.service and it looks like it is working now (tried 3 times); we will see (a drop-in variant of this workaround is sketched below). The other bad thing is that I tried to go back to my old configuration on eMMC, but OMV5 also doesn't see the devices in the shared directories; it only works after removing all of them and setting them up from scratch. I don't know if there has been any progress with ARM performance and native ZFS encryption, but maybe I will be forced to set that up instead of LUKS and ZFS.
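     A note for anyone copying the sleep workaround: editing /lib/systemd/system/zfs-import-cache.service directly will be overwritten by package upgrades, so a drop-in override is probably the safer way to carry the same delay. A minimal sketch (the drop-in file name and the 30-second value are just my choices, not anything the package ships):

     # create a drop-in so the delay survives zfsutils-linux upgrades
     mkdir -p /etc/systemd/system/zfs-import-cache.service.d
     # give cryptsetup time to open all LUKS devices before the pool import starts
     printf '[Service]\nExecStartPre=/usr/bin/sleep 30\n' > /etc/systemd/system/zfs-import-cache.service.d/wait-for-luks.conf
     systemctl daemon-reload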
  3. grek

    5.15 broke zfs-dkms

     @usefulnoise yesterday I ran the helios64 with Armbian 22.08.0 Bullseye (Debian) and kernel 5.15.59-rockchip64. I built the image from the Armbian sources. I tried to include ZFS in the image but gave up. Finally I successfully installed zfs-dkms from the backports repos (see the sketch at the end of this post):

     apt -t bullseye-backports install zfs-dkms zfsutils-linux

     The modules were built without any errors. I also successfully installed OMV 6 with the ZFS plugin and LUKS encryption.

     root@helios64:~# zfs --version
     zfs-2.1.5-1~bpo11+1
     zfs-kmod-2.1.5-1~bpo11+1
     root@helios64:~# uname -a
     Linux helios64 5.15.59-rockchip64 #trunk SMP PREEMPT Wed Aug 10 11:29:36 UTC 2022 aarch64 GNU/Linux
     root@helios64:~#

     Right now I have 16 h without a reboot. I left the system at defaults for the tests.

     root@helios64:~# cat /boot/armbianEnv.txt
     verbosity=1
     bootlogo=false
     overlay_prefix=rockchip
     rootdev=UUID=04ed9a40-6590-4491-8d58-0d647b8cc9db
     rootfstype=ext4
     usbstoragequirks=0x2537:0x1066:u,0x2537:0x1068:u
     root@helios64:~#
     root@helios64:~# cat /etc/default/cpufrequtils
     ENABLE="true"
     GOVERNOR="ondemand"
     MAX_SPEED="0"
     MIN_SPEED="0"
     root@helios64:~#

     I'm in the middle of a ZFS scrub; we will see if it completes without errors. So far I have seen one problem: yesterday I ran screen and copied some data over NFS, and it failed with a kernel error. Probably I will need to go back to my old cpufreq settings (GOVERNOR=performance MAX_SPEED=1416000 MIN_SPEED=1416000).

     Aug 10 17:48:21 helios64 kernel: [ 3612.621208] FS-Cache: Loaded
     Aug 10 17:48:22 helios64 kernel: [ 3612.787472] FS-Cache: Netfs 'nfs' registered for caching
     Aug 10 17:59:51 helios64 kernel: [ 4301.850819] Modules linked in: nfsv3 nfs fscache netfs dm_crypt dm_mod r8152 snd_soc_hdmi_codec leds_pwm gpio_charger pwm_fan panfrost snd_soc_rockchip_i2s gpu_sched snd_soc_rockchip_pcm sg snd_soc_core snd_pcm_dmaengine snd_pcm rockchip_vdec(C) hantro_vpu(C) snd_timer rockchip_iep v4l2_h264 snd rockchip_rga videobuf2_dma_contig videobuf2_vmalloc v4l2_mem2mem videobuf2_dma_sg videobuf2_memops fusb302 soundcore videobuf2_v4l2 tcpm videobuf2_common typec videodev mc gpio_beeper cpufreq_dt zfs(POE) zunicode(POE) zzstd(OE) nfsd zlua(OE) zcommon(POE) znvpair(POE) zavl(POE) auth_rpcgss icp(POE) nfs_acl lockd spl(OE) grace softdog ledtrig_netdev lm75 sunrpc ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear md_mod realtek dwmac_rk stmmac_platform stmmac pcs_xpcs adc_keys
     Aug 10 17:59:51 helios64 kernel: [ 4301.857423] CPU: 5 PID: 79841 Comm: mc Tainted: P C OE 5.15.59-rockchip64 #trunk
     Aug 10 17:59:51 helios64 kernel: [ 4301.858163] Hardware name: Helios64 (DT)
     Aug 10 17:59:51 helios64 kernel: [ 4301.858509] pstate: 40000005 (nZcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
     Aug 10 17:59:51 helios64 kernel: [ 4301.859119] pc : nfs_generic_pg_pgios+0x60/0xc0 [nfs]
     Aug 10 17:59:51 helios64 kernel: [ 4301.859619] lr : nfs_generic_pg_pgios+0x5c/0xc0 [nfs]
     Aug 10 17:59:51 helios64 kernel: [ 4301.860096] sp : ffff80000e2b35c0
     Aug 10 17:59:51 helios64 kernel: [ 4301.860388] x29: ffff80000e2b35c0 x28: ffff00006c52d700 x27: ffff00006c52d700
     Aug 10 17:59:51 helios64 kernel: [ 4301.861018] x26: ffff80000e2b3880 x25: ffff80000954be68 x24: 0000000000000400
     Aug 10 17:59:51 helios64 kernel: [ 4301.861646] x23: 0000000000000000 x22: ffff00005c2f8780 x21: ffff8000019fd370
     Aug 10 17:59:51 helios64 kernel: [ 4301.862273] x20: ffff80000e2b3828 x19: ffff00009ff3a580 x18: 0000000000000001
     Aug 10 17:59:51 helios64 kernel: [ 4301.862901] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000002
     Aug 10 17:59:51 helios64 kernel: [ 4301.863529] x14: 0000000000000080 x13: 000000000001b896 x12: ffff0000304cf9b8
     Aug 10 17:59:51 helios64 kernel: [ 4301.864157] x11: ffff00009ff3a418 x10: ffff00009ff3a488 x9 : 0000000000000000
     Aug 10 17:59:51 helios64 kernel: [ 4301.864784] x8 : ffff00009ff3a908 x7 : 0000000000000000 x6 : ffff00009ff3a590
     Aug 10 17:59:51 helios64 kernel: [ 4301.865411] x5 : 0000000000000096 x4 : ffff00009ff3a6f8 x3 : 0000000000000800
     Aug 10 17:59:51 helios64 kernel: [ 4301.866039] x2 : 50c2f8bdeb7ff200 x1 : 0000000000000000 x0 : 0000000000000000
     Aug 10 17:59:51 helios64 kernel: [ 4301.866667] Call trace:
     Aug 10 17:59:51 helios64 kernel: [ 4301.866883] nfs_generic_pg_pgios+0x60/0xc0 [nfs]
     Aug 10 17:59:51 helios64 kernel: [ 4301.867331] nfs_pageio_doio+0x48/0xa0 [nfs]
     Aug 10 17:59:51 helios64 kernel: [ 4301.867739] __nfs_pageio_add_request+0x148/0x420 [nfs]
     Aug 10 17:59:51 helios64 kernel: [ 4301.868230] nfs_pageio_add_request_mirror+0x40/0x60 [nfs]
     Aug 10 17:59:51 helios64 kernel: [ 4301.868743] nfs_pageio_add_request+0x208/0x290 [nfs]
     Aug 10 17:59:51 helios64 kernel: [ 4301.869220] readpage_async_filler+0x20c/0x400 [nfs]
     Aug 10 17:59:51 helios64 kernel: [ 4301.869688] read_cache_pages+0xcc/0x198
     Aug 10 17:59:51 helios64 kernel: [ 4301.870039] nfs_readpages+0x138/0x1b8 [nfs]
     Aug 10 17:59:51 helios64 kernel: [ 4301.870447] read_pages+0x188/0x288
     Aug 10 17:59:51 helios64 kernel: [ 4301.870757] page_cache_ra_unbounded+0x148/0x248
     Aug 10 17:59:51 helios64 kernel: [ 4301.871163] do_page_cache_ra+0x40/0x50
     Aug 10 17:59:51 helios64 kernel: [ 4301.871501] ondemand_readahead+0x128/0x2c8
     Aug 10 17:59:51 helios64 kernel: [ 4301.871870] page_cache_async_ra+0xd4/0xd8
     Aug 10 17:59:51 helios64 kernel: [ 4301.872232] filemap_get_pages+0x1fc/0x568
     Aug 10 17:59:51 helios64 kernel: [ 4301.872595] filemap_read+0xac/0x340
     Aug 10 17:59:51 helios64 kernel: [ 4301.872911] generic_file_read_iter+0xf4/0x188
     Aug 10 17:59:51 helios64 kernel: [ 4301.873304] nfs_file_read+0x9c/0x128 [nfs]
     Aug 10 17:59:51 helios64 kernel: [ 4301.873706] new_sync_read+0x104/0x180
     Aug 10 17:59:51 helios64 kernel: [ 4301.874040] vfs_read+0x148/0x1e0
     Aug 10 17:59:51 helios64 kernel: [ 4301.874334] ksys_read+0x68/0xf0
     Aug 10 17:59:51 helios64 kernel: [ 4301.874622] __arm64_sys_read+0x1c/0x28
     Aug 10 17:59:51 helios64 kernel: [ 4301.874960] invoke_syscall+0x44/0x108
     Aug 10 17:59:51 helios64 kernel: [ 4301.875293] el0_svc_common.constprop.3+0x94/0xf8
     Aug 10 17:59:51 helios64 kernel: [ 4301.875709] do_el0_svc+0x24/0x88
     Aug 10 17:59:51 helios64 kernel: [ 4301.876003] el0_svc+0x20/0x50
     Aug 10 17:59:51 helios64 kernel: [ 4301.876275] el0t_64_sync_handler+0x90/0xb8
     Aug 10 17:59:51 helios64 kernel: [ 4301.876645] el0t_64_sync+0x180/0x184
     Aug 10 17:59:51 helios64 kernel: [ 4301.877507] ---[ end trace 970b42649c1e42e3 ]---

     Edit: after a few hours I saw errors regarding my HDD.
     Aug 11 11:32:32 helios64 kernel: [67461.218137] ata1: hard resetting link
     Aug 11 11:32:32 helios64 kernel: [67461.694283] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Aug 11 11:32:32 helios64 kernel: [67461.696935] ata1.00: configured for UDMA/133
     Aug 11 11:32:32 helios64 kernel: [67461.697672] sd 0:0:0:0: [sda] tag#19 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
     Aug 11 11:32:32 helios64 kernel: [67461.698535] sd 0:0:0:0: [sda] tag#19 Sense Key : 0x5 [current]
     Aug 11 11:32:32 helios64 kernel: [67461.699057] sd 0:0:0:0: [sda] tag#19 ASC=0x21 ASCQ=0x4
     Aug 11 11:32:32 helios64 kernel: [67461.699518] sd 0:0:0:0: [sda] tag#19 CDB: opcode=0x88 88 00 00 00 00 00 5c bd e5 50 00 00 08 00 00 00
     Aug 11 11:32:32 helios64 kernel: [67461.701293] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796629049344 size=1048576 flags=40080cb0
     Aug 11 11:32:32 helios64 kernel: [67461.702300] sd 0:0:0:0: [sda] tag#20 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
     Aug 11 11:32:32 helios64 kernel: [67461.703147] sd 0:0:0:0: [sda] tag#20 Sense Key : 0x5 [current]
     Aug 11 11:32:32 helios64 kernel: [67461.703668] sd 0:0:0:0: [sda] tag#20 ASC=0x21 ASCQ=0x4
     Aug 11 11:32:32 helios64 kernel: [67461.704127] sd 0:0:0:0: [sda] tag#20 CDB: opcode=0x88 88 00 00 00 00 00 5c bd ed 50 00 00 08 00 00 00
     Aug 11 11:32:32 helios64 kernel: [67461.705890] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796630097920 size=1048576 flags=40080cb0
     Aug 11 11:32:32 helios64 kernel: [67461.706862] sd 0:0:0:0: [sda] tag#28 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
     Aug 11 11:32:32 helios64 kernel: [67461.707709] sd 0:0:0:0: [sda] tag#28 Sense Key : 0x5 [current]
     Aug 11 11:32:32 helios64 kernel: [67461.708228] sd 0:0:0:0: [sda] tag#28 ASC=0x21 ASCQ=0x4
     Aug 11 11:32:32 helios64 kernel: [67461.708688] sd 0:0:0:0: [sda] tag#28 CDB: opcode=0x88 88 00 00 00 00 00 5c bd f5 50 00 00 08 00 00 00
     Aug 11 11:32:32 helios64 kernel: [67461.710466] zio pool=data vdev=/dev/mapper/sda-crypt error=5 type=1 offset=796631146496 size=1048576 flags=40080cb0
     Aug 11 11:32:32 helios64 kernel: [67461.711410] ata1: EH complete

     So I changed all my settings back to the old ones (from before the update):

     GOVERNOR=performance
     MAX_SPEED=1416000
     MIN_SPEED=1416000

     and

     root@helios64:~# cat /boot/armbianEnv.txt
     verbosity=7
     bootlogo=false
     console=serial
     extraargs=earlyprintk ignore_loglevel libata.force=3.0

     We will see what's next. I rebooted the system with those settings and the ZFS scrub resumed.
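     For anyone who wants to reproduce the zfs-dkms install from backports mentioned at the top of this post, the steps below are roughly what it takes on a Bullseye-based image. This is a sketch, not an exact transcript: the mirror in the sources line and the Armbian headers package name are assumptions, adjust them for your setup.

     # enable bullseye-backports (zfs lives in contrib)
     echo "deb http://deb.debian.org/debian bullseye-backports main contrib" > /etc/apt/sources.list.d/bullseye-backports.list
     apt update
     # DKMS needs the headers for the running kernel; on Armbian that is usually:
     apt install linux-headers-current-rockchip64
     # pull zfs-dkms and the userland tools from backports
     apt -t bullseye-backports install zfs-dkms zfsutils-linux
     # confirm the module was built and loads
     modprobe zfs
     zfs --version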
  4. grek

    Extend Helios64 storage.

     Misunderstanding: I want to create a NEW ZFS pool from disks installed in an external enclosure. I'm not sure whether the disks will be correctly exposed at the OS level... or whether it will work at all...
  5. Hey guys. I have a Helios64 with 5x WD 14TB Red Pro... and ZFS. But I'm currently out of free space and want to extend my storage pool somehow. I'm wondering if it's possible to buy an additional storage enclosure, for example the QNAP TL-D800C (JBOD, 1x USB 3.2 Gen 2 Type-C), put an additional 8x HDD in it, and create an additional raidz1 ZFS pool (a sketch of the layout I have in mind is below). Do you think it will work?
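     For context, the new pool I would try to create looks roughly like this, assuming the enclosure exposes the drives as plain USB block devices. The /dev/disk/by-id/usb-DISK* names are hypothetical placeholders for the eight new disks, and data2 is just an example pool name:

     # see which disks the USB enclosure exposes
     ls -l /dev/disk/by-id/ | grep usb
     # build a second, independent raidz1 pool from the eight new disks (names are placeholders)
     zpool create -o ashift=12 data2 raidz1 \
       /dev/disk/by-id/usb-DISK1 /dev/disk/by-id/usb-DISK2 \
       /dev/disk/by-id/usb-DISK3 /dev/disk/by-id/usb-DISK4 \
       /dev/disk/by-id/usb-DISK5 /dev/disk/by-id/usb-DISK6 \
       /dev/disk/by-id/usb-DISK7 /dev/disk/by-id/usb-DISK8
     zpool status data2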
  6. Thanks @ShadowDance for the tip, my scrub completed without errors.
  7. Today I manually ran a zfs scrub on my ZFS mirror pool and got similar errors. The funny thing is that normally the scrub is initiated by cron without any issue. Any suggestions on what I should do? I ran zpool clear data sda-crypt and zpool clear data sdb-crypt on my pool and now it reports ONLINE, but I'm a little afraid about my data... Output from before today's manual scrub (scrub without errors on Sun Mar 14):

     root@helios64:/# zpool status -v data
       pool: data
      state: ONLINE
       scan: scrub repaired 0B in 06:17:38 with 0 errors on Sun Mar 14 06:41:40 2021
     config:
             NAME           STATE     READ WRITE CKSUM
             data           ONLINE       0     0     0
               mirror-0     ONLINE       0     0     0
                 sda-crypt  ONLINE       0     0     0
                 sdb-crypt  ONLINE       0     0     0
             cache
               sdc2         ONLINE       0     0     0

     root@helios64:/var/log# cat /etc/cron.d/zfsutils-linux
     PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
     # TRIM the first Sunday of every month.
     24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi
     # Scrub the second Sunday of every month.
     24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi
     root@helios64:/var/log#
  8. I have tried to change the CPU governor to ondemand:

     root@helios64:/# cat /etc/default/cpufrequtils
     ENABLE="true"
     GOVERNOR=ondemand
     MAX_SPEED=1800000
     MIN_SPEED=408000

     but after 5 minutes my server rebooted. Previously I used:

     cat /etc/default/cpufrequtils
     ENABLE="true"
     GOVERNOR=performance
     MAX_SPEED=1416000
     MIN_SPEED=1416000

     for the last 3 kernel updates without any issue; uptimes were mostly about 2-3 weeks without kernel panics or reboots. Unfortunately I have nothing in my logs. I have changed verbosity to 7 and I'm logging my console to a file; I will update when I get an error. I have a ZFS mirror (LUKS on top) + ZFS cache. Also, I didn't change the disk scheduler: I have bfq, but it has worked since the last upgrade without any problem. Currently I'm running a zfs scrub... we will see.

     [update 1]:
     root@helios64:/# uptime
     11:02:40 up 2:28, 1 user, load average: 4.80, 4.81, 4.60
     The scrub is still in progress, but I got a few read errors. I'm not sure if it is a problem with the HDD connector, the HDD scheduler, or the CPU frequency... I didn't have this before.

     [update 2]: The server has rebooted... The last 'task' I remember was Nextcloud generating thumbnails. Unfortunately there was nothing in the console (it was connected the whole time), even with:

     root@helios64:~# cat /boot/armbianEnv.txt
     verbosity=7
     console=serial
     extraargs=earlyprintk ignore_loglevel

     The 'failed command: READ FPDMA QUEUED' errors came from the zfs scrub a few hours earlier.
  9. Hey, I have a ZFS mirror with LUKS encryption on top + SSD cache.

     root@helios64:~# zpool status
       pool: data
      state: ONLINE
       scan: scrub repaired 0B in 05:27:58 with 0 errors on Sun Feb 14 20:06:49 2021
     config:
             NAME           STATE     READ WRITE CKSUM
             data           ONLINE       0     0     0
               mirror-0     ONLINE       0     0     0
                 sda-crypt  ONLINE       0     0     0
                 sdb-crypt  ONLINE       0     0     0
             cache
               sdc2         ONLINE       0     0     0

     Here you have my results... it looks like it is faster to have LUKS on top...
  10. grek

    ZFS on Helios64

     Thx @FloBaoti, I had to remove openmediavault-zfs first...

     root@helios64:/# zfs --version
     zfs-2.0.1-1
     zfs-kmod-2.0.1-1
     root@helios64:/# uname -a
     Linux helios64 5.10.12-rockchip64 #21.02.1 SMP PREEMPT Wed Feb 3 20:55:02 CET 2021 aarch64 GNU/Linux
     root@helios64:/#

     Now I have working ZFS again.
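     For completeness, a sketch of the kind of sequence this implies; the exact package set is an assumption based on my own setup, adjust it to yours:

     # remove the OMV plugin that holds on to the old ZFS packages
     apt remove openmediavault-zfs
     # reinstall/upgrade the ZFS packages so the module matches the new kernel
     apt install zfs-dkms zfsutils-linux
     # load the module and put the plugin back
     modprobe zfs
     apt install openmediavault-zfs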
  11. grek

    ZFS on Helios64

     It looks like the latest update (Linux helios64 5.10.12-rockchip64 #21.02.1) broke zfs-dkms. After the reboot the zfs kernel module is missing... Probably I have to build it manually again (a rebuild sketch is below).
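     For the record, before rebuilding by hand it may be worth asking DKMS to rebuild the module for the new kernel first. A sketch of what I would try, assuming zfs-dkms and the Armbian headers package are installed (replace 2.0.1 with whatever version dkms status reports):

     # what DKMS knows about
     dkms status
     # headers for the running kernel (Armbian naming)
     apt install linux-headers-current-rockchip64
     # rebuild and install the zfs module for the running kernel
     dkms build zfs/2.0.1 -k $(uname -r)
     dkms install zfs/2.0.1 -k $(uname -r)
     modprobe zfs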
  12. grek

    ZFS on Helios64

     @tionebrr I built it from this script yesterday without any issue. I only changed the line with make deb to make deb-kmod to create only the kmod package (that was what I needed after the kernel upgrade):

     cd "/scratch/$zfsver-$(uname -r)"
     sh autogen.sh
     ./configure
     make -s -j$(nproc)
     make deb-kmod
     mkdir "/scratch/deb-$zfsver-$(uname -r)"
     cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
     rm -rf "/scratch/$zfsver-$(uname -r)"
     exit
     EOF

     If you have trouble, please PM me and I can send you this deb package (mine is kmod-zfs-5.9.14-rockchip64_2.0.0-1_arm64.deb). I changed the zfsver git tag to 2.0.0 instead of zfsver="zfs-2.0.0-rc6". I also commented out the whole cleanup section, installed ZFS from the official repo, and I'm using only the kmod installed manually; otherwise I had a problem with OMV :/

     root@helios64:~# zfs version
     zfs-0.8.5-3~bpo10+1
     zfs-kmod-2.0.0-1
     root@helios64:~#
     root@helios64:~# dpkg -l |grep zfs
     ii kmod-zfs-5.9.10-rockchip64 2.0.0-0 arm64 zfs kernel module(s) for 5.9.10-rockchip64
     ii kmod-zfs-5.9.11-rockchip64 2.0.0-1 arm64 zfs kernel module(s) for 5.9.11-rockchip64
     ii kmod-zfs-5.9.14-rockchip64 2.0.0-1 arm64 zfs kernel module(s) for 5.9.14-rockchip64
     ii libzfs2linux 0.8.5-3~bpo10+1 arm64 OpenZFS filesystem library for Linux
     ii openmediavault-zfs 5.0.5 arm64 OpenMediaVault plugin for ZFS
     ii zfs-auto-snapshot 1.2.4-2 all ZFS automatic snapshot service
     iF zfs-dkms 0.8.5-3~bpo10+1 all OpenZFS filesystem kernel modules for Linux
     ii zfs-dkms-dummy 0.8.4 all Dummy zfs-dkms package for when using built kmod
     ii zfs-zed 0.8.5-3~bpo10+1 arm64 OpenZFS Event Daemon
     ii zfsutils-linux 0.8.5-3~bpo10+1 arm64 command-line tools to manage OpenZFS filesystems
     root@helios64:~#

     @ShadowDance I forgot to mention that on top I have full encryption with LUKS. Like you, I upgraded the OS yesterday to Armbian 20.11.3 and the kernel to Linux helios64 5.9.14-rockchip64; I posted above which versions I have. I wanted to have ZFS with OMV and this solution is somehow working for me; if I used all the packages from the manual build I couldn't get the OMV ZFS plugin to work :/ My swap is off. I will change something in the settings if I face the problem again. Currently I've finished copying my data and everything is working as desired.
  13. To install OMV you should log in to your Helios via SSH, then:

     root@helios64:~# armbian-config
     Software -> Softy -> OMV

     You can verify that it installed:

     root@helios64:~# dpkg -l |grep openmediavault
     ii openmediavault 5.5.17-3 all openmediavault - The open network attached storage solution
     ii openmediavault-fail2ban 5.0.5 all OpenMediaVault Fail2ban plugin
     ii openmediavault-flashmemory 5.0.7 all folder2ram plugin for OpenMediaVault
     ii openmediavault-keyring 1.0 all GnuPG archive keys of the OpenMediaVault archive
     ii openmediavault-luksencryption 5.0.2 all OpenMediaVault LUKS encryption plugin
     ii openmediavault-omvextrasorg 5.4.2 all OMV-Extras.org Package Repositories for OpenMediaVault
     ii openmediavault-zfs 5.0.5 arm64 OpenMediaVault plugin for ZFS
     root@helios64:~#
  14. grek

    ZFS on Helios64

     Hey, do you use compression on your ZFS datasets? Yesterday I wanted to do some cleanup (previously I only had the pool created and all directories lived directly in it), so I created a new dataset for each 'directory' like home, docker-data etc. I also set them up with ACL support and compression (a zfs create variant of this is sketched at the end of this post):

     zfs set aclinherit=passthrough data/docker-data
     zfs set acltype=posixacl data/docker-data
     zfs set xattr=sa data/docker-data
     zfs set compression=lz4 data/docker-data

     I successfully moved 3 directories (about 300 GB each) from the main 'data' dataset to 'data'/something. The load was quite high and the write speed about 40-55 MB/s (ZFS mirror, 2x 4TB HDD). When I tried with another directory I got a kernel panic. I hit this problem twice yesterday. Finally I disabled compression on the rest of the datasets. Kernel: 5.9.11

     Dec 14 22:46:36 localhost kernel: [ 469.592940] Modules linked in: quota_v2 quota_tree dm_crypt algif_skcipher af_alg dm_mod softdog governor_performance r8152 snd_soc_hdmi_codec pwm_fan gpio_charger leds_pwm snd_soc_rockchip_i2s snd_soc_core rockchip_vdec(C) snd_pcm_dmaengine snd_pcm hantro_vpu(C) rockchip_rga v4l2_h264 snd_timer fusb302 videobuf2_dma_contig rockchipdrm snd videobuf2_dma_sg dw_mipi_dsi v4l2_mem2mem videobuf2_vmalloc dw_hdmi soundcore videobuf2_memops tcpm analogix_dp panfrost videobuf2_v4l2 gpu_sched typec videobuf2_common drm_kms_helper cec videodev rc_core mc sg drm drm_panel_orientation_quirks cpufreq_dt gpio_beeper zfs(POE) zunicode(POE) zzstd(OE) zlua(OE) nfsd auth_rpcgss nfs_acl lockd grace zcommon(POE) znvpair(POE) zavl(POE) icp(POE) spl(OE) sunrpc lm75 ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear md_mod realtek dwmac_rk stmmac_platform stmmac mdio_xpcs adc_keys
     Dec 14 22:46:36 localhost kernel: [ 469.600200] CPU: 5 PID: 5769 Comm: z_wr_iss Tainted: P C OE 5.9.11-rockchip64 #20.11.1
     Dec 14 22:46:36 localhost kernel: [ 469.600982] Hardware name: Helios64 (DT)
     Dec 14 22:46:36 localhost kernel: [ 469.601328] pstate: 20000005 (nzCv daif -PAN -UAO BTYPE=--)
     Dec 14 22:46:36 localhost kernel: [ 469.601942] pc : lz4_compress_zfs+0xfc/0x7d0 [zfs]
     Dec 14 22:46:36 localhost kernel: [ 469.602367] lr : 0x11be8
     Dec 14 22:46:36 localhost kernel: [ 469.602591] sp : ffff80001434bbd0
     Dec 14 22:46:36 localhost kernel: [ 469.602882] x29: ffff80001434bbd0 x28: ffff0000af99cf80
     Dec 14 22:46:36 localhost kernel: [ 469.603349] x27: 00000000000000ff x26: ffff000064adaca8
     Dec 14 22:46:36 localhost kernel: [ 469.603815] x25: ffff800022378be8 x24: ffff800012855004
     Dec 14 22:46:36 localhost kernel: [ 469.604280] x23: ffff0000a04c4000 x22: ffff8000095c0000
     Dec 14 22:46:36 localhost kernel: [ 469.604746] x21: ffff800012855000 x20: 0000000000020000
     Dec 14 22:46:36 localhost kernel: [ 469.605212] x19: ffff800022367000 x18: 000000005b5eaa7e
     Dec 14 22:46:36 localhost kernel: [ 469.605677] x17: 00000000fffffff0 x16: ffff800022386ff8
     Dec 14 22:46:36 localhost kernel: [ 469.606143] x15: ffff800022386ffa x14: ffff800022386ffb
     Dec 14 22:46:36 localhost kernel: [ 469.606609] x13: 00000000ffffffff x12: ffffffffffff0001
     Dec 14 22:46:36 localhost kernel: [ 469.607075] x11: ffff800012867a81 x10: 000000009e3779b1
     Dec 14 22:46:36 localhost kernel: [ 469.607540] x9 : ffff80002236a2f2 x8 : ffff80002237a2f9
     Dec 14 22:46:36 localhost kernel: [ 469.608006] x7 : ffff800012871000 x6 : ffff800022387000
     Dec 14 22:46:36 localhost kernel: [ 469.608472] x5 : ffff800022386ff4 x4 : ffff80002237a301
     Dec 14 22:46:36 localhost kernel: [ 469.608937] x3 : 0000000000000239 x2 : ffff80002237a2f1
     Dec 14 22:46:36 localhost kernel: [ 469.609402] x1 : ffff800022379a3b x0 : 00000000000005b5
     Dec 14 22:46:36 localhost kernel: [ 469.609870] Call trace:
     Dec 14 22:46:36 localhost kernel: [ 469.610234] lz4_compress_zfs+0xfc/0x7d0 [zfs]
     Dec 14 22:46:36 localhost kernel: [ 469.610737] zio_compress_data+0xc0/0x13c [zfs]
     Dec 14 22:46:36 localhost kernel: [ 469.611243] zio_write_compress+0x294/0x6d0 [zfs]
     Dec 14 22:46:36 localhost kernel: [ 469.611764] zio_execute+0xa4/0x144 [zfs]
     Dec 14 22:46:36 localhost kernel: [ 469.612137] taskq_thread+0x278/0x450 [spl]
     Dec 14 22:46:36 localhost kernel: [ 469.612517] kthread+0x118/0x150
     Dec 14 22:46:36 localhost kernel: [ 469.612804] ret_from_fork+0x10/0x34
     Dec 14 22:46:36 localhost kernel: [ 469.613660] ---[ end trace 34e3e71ff3a20382 ]---
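     As an aside, the same properties can be applied at creation time instead of with zfs set afterwards; a small sketch using the docker-data dataset from above as the example name:

     # create the dataset with ACL support, xattr-in-inode and lz4 compression in one go
     zfs create \
       -o aclinherit=passthrough \
       -o acltype=posixacl \
       -o xattr=sa \
       -o compression=lz4 \
       data/docker-data
     # confirm the properties took effect
     zfs get aclinherit,acltype,xattr,compression data/docker-data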