grek

  • Posts: 13
  • Joined
  • Last visited

grek's Achievements

  1. Thanks @ShadowDance for the tip, my scrub test completed without errors.
  2. Today I manually ran a ZFS scrub on my ZFS mirror pool and got similar errors. The funny thing is that normally the scrub is initiated by cron without any issue. Any suggestion what I should do? I ran:

        zpool clear data sda-crypt
        zpool clear data sdb-crypt

     on my pool and now the state is ONLINE, but I'm a little afraid about my data......

     Output before today's manual scrub: the scrub on Sun Mar 14 finished without errors.

        root@helios64:/# zpool status -v data
          pool: data
         state: ONLINE
          scan: scrub repaired 0B in 06:17:38 with 0 errors on Sun Mar 14 06:41:40 2021
        config:

                NAME           STATE     READ WRITE CKSUM
                data           ONLINE       0     0     0
                  mirror-0     ONLINE       0     0     0
                    sda-crypt  ONLINE       0     0     0
                    sdb-crypt  ONLINE       0     0     0
                cache
                  sdc2         ONLINE       0     0     0

        root@helios64:/var/log# cat /etc/cron.d/zfsutils-linux
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # TRIM the first Sunday of every month.
        24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi

        # Scrub the second Sunday of every month.
        24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi
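     In case it helps anyone in the same situation, this is roughly how I watch a pool after clearing errors and re-scrubbing (a sketch; the pool name 'data' is from my setup):

        # start a fresh scrub and poll its progress once a minute
        zpool scrub data
        watch -n 60 'zpool status -v data'

        # prints "all pools are healthy" when nothing is wrong
        zpool status -x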
  3. I tried changing the CPU governor to ondemand:

        root@helios64:/# cat /etc/default/cpufrequtils
        ENABLE="true"
        GOVERNOR=ondemand
        MAX_SPEED=1800000
        MIN_SPEED=408000

     but after 5 minutes my server rebooted. Previously I used:

        cat /etc/default/cpufrequtils
        ENABLE="true"
        GOVERNOR=performance
        MAX_SPEED=1416000
        MIN_SPEED=1416000

     for the last 3 kernel updates without any issue; uptimes were mostly about 2-3 weeks without kernel panics or reboots. Unfortunately I have nothing in my logs. I have changed the verbosity to 7, and I'm logging the console output to a file; I will update when I catch the error. I have a ZFS mirror (LUKS underneath) + ZFS cache. Also, I didn't change the disk scheduler: I have bfq, but it has worked since the last upgrade without any problem. Currently I'm running a ZFS scrub... we will see.

     [update 1]:

        root@helios64:/# uptime
         11:02:40 up 2:28, 1 user, load average: 4.80, 4.81, 4.60

     The scrub is still in progress, but I got a few read errors. I'm not sure if it is a problem with the HDD connector, the disk scheduler, or the CPU frequency... I didn't have them before.

     [update 2]: The server has rebooted..... The last task I remember was Nextcloud generating thumbnails. Unfortunately there was nothing on the console (it was connected the whole time), even with:

        root@helios64:~# cat /boot/armbianEnv.txt
        verbosity=7
        console=serial
        extraargs=earlyprintk ignore_loglevel

     The "failed command: READ FPDMA QUEUED" errors came from the ZFS scrub a few hours earlier.
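     If you want to see what the governor is actually doing before (or instead of) editing /etc/default/cpufrequtils, the sysfs interface is handy. A sketch, using cpu0 as a stand-in for any core:

        # current governor and frequency settings for cpu0
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq

        # switch one core to ondemand on the fly (does not survive a reboot)
        echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor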
  4. Hey, I have a ZFS mirror running on top of LUKS encryption, plus an SSD cache:

        root@helios64:~# zpool status
          pool: data
         state: ONLINE
          scan: scrub repaired 0B in 05:27:58 with 0 errors on Sun Feb 14 20:06:49 2021
        config:

                NAME           STATE     READ WRITE CKSUM
                data           ONLINE       0     0     0
                  mirror-0     ONLINE       0     0     0
                    sda-crypt  ONLINE       0     0     0
                    sdb-crypt  ONLINE       0     0     0
                cache
                  sdc2         ONLINE       0     0     0

     Here you have my results... it looks like it is faster to have ZFS on top of LUKS... (a sketch of how this layout is built follows below)
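     For anyone wanting to reproduce the layout, the steps are roughly as follows. A sketch only; the device names, mapper names and pool name are the ones from my setup, and luksFormat wipes the disks:

        # encrypt both disks and open them as mapper devices
        cryptsetup luksFormat /dev/sda
        cryptsetup luksFormat /dev/sdb
        cryptsetup open /dev/sda sda-crypt
        cryptsetup open /dev/sdb sdb-crypt

        # build the mirror on the mapper devices, then attach the SSD partition as cache
        zpool create data mirror /dev/mapper/sda-crypt /dev/mapper/sdb-crypt
        zpool add data cache /dev/sdc2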
  5. Thx @FloBaoti, I had to remove openmediavault-zfs first....

        root@helios64:/# zfs --version
        zfs-2.0.1-1
        zfs-kmod-2.0.1-1
        root@helios64:/# uname -a
        Linux helios64 5.10.12-rockchip64 #21.02.1 SMP PREEMPT Wed Feb 3 20:55:02 CET 2021 aarch64 GNU/Linux

     Now I have working ZFS again.
  6. It looks like the latest update (Linux helios64 5.10.12-rockchip64 #21.02.1) broke the ZFS DKMS package. After the reboot the zfs kernel module is missing... I'll probably have to build it manually again. (A quick check is sketched below.)
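     A quick way to confirm that it really is the module that went missing after a kernel upgrade (a sketch using the standard tools):

        # does DKMS have a zfs build for the running kernel?
        dkms status | grep zfs

        # is a module file present for this kernel at all?
        find /lib/modules/$(uname -r) -name 'zfs.ko*'

        # try to load it; modprobe fails loudly if the build is missing
        modprobe zfs && echo "zfs module loaded"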
  7. @tionebrr I built it from this script yesterday without any issue. I only changed the make deb line to make deb-kmod, to create only the kmod package (that was all I needed after the kernel upgrade):

        cd "/scratch/$zfsver-$(uname -r)"
        sh autogen.sh
        ./configure
        make -s -j$(nproc)
        make deb-kmod
        mkdir "/scratch/deb-$zfsver-$(uname -r)"
        cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
        rm -rf "/scratch/$zfsver-$(uname -r)"
        exit
        EOF

     If you have trouble, please PM me, I can send you this deb package (mine is kmod-zfs-5.9.14-rockchip64_2.0.0-1_arm64.deb). I changed zfsver to the 2.0.0 git tag instead of zfsver="zfs-2.0.0-rc6". I also commented out the whole cleanup section, installed ZFS from the official repo, and am using only the manually built kmod; when I used all the packages from the manual build, I couldn't get the OMV ZFS plugin to work :/

        root@helios64:~# zfs version
        zfs-0.8.5-3~bpo10+1
        zfs-kmod-2.0.0-1
        root@helios64:~# dpkg -l | grep zfs
        ii  kmod-zfs-5.9.10-rockchip64  2.0.0-0          arm64  zfs kernel module(s) for 5.9.10-rockchip64
        ii  kmod-zfs-5.9.11-rockchip64  2.0.0-1          arm64  zfs kernel module(s) for 5.9.11-rockchip64
        ii  kmod-zfs-5.9.14-rockchip64  2.0.0-1          arm64  zfs kernel module(s) for 5.9.14-rockchip64
        ii  libzfs2linux                0.8.5-3~bpo10+1  arm64  OpenZFS filesystem library for Linux
        ii  openmediavault-zfs          5.0.5            arm64  OpenMediaVault plugin for ZFS
        ii  zfs-auto-snapshot           1.2.4-2          all    ZFS automatic snapshot service
        iF  zfs-dkms                    0.8.5-3~bpo10+1  all    OpenZFS filesystem kernel modules for Linux
        ii  zfs-dkms-dummy              0.8.4            all    Dummy zfs-dkms package for when using built kmod
        ii  zfs-zed                     0.8.5-3~bpo10+1  arm64  OpenZFS Event Daemon
        ii  zfsutils-linux              0.8.5-3~bpo10+1  arm64  command-line tools to manage OpenZFS filesystems

     @ShadowDance I forgot to mention that on top of the disks I have full LUKS encryption. Like you, I upgraded the OS yesterday to Armbian 20.11.3 and the kernel to Linux helios64 5.9.14-rockchip64; I posted above which versions I have. I wanted to have ZFS together with OMV, and this solution is somehow working for me. My swap is off. I will change something in the settings if I face the problem again. I've now finished copying my data and everything is working as desired.
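     One extra safeguard for this mixed repo + manual-kmod setup: holding the repo packages keeps a routine apt upgrade from pulling in a mismatched module. A sketch; whether you want this depends on how you upgrade:

        # freeze the repo ZFS packages until you decide to upgrade deliberately
        apt-mark hold zfs-dkms zfsutils-linux zfs-zed

        # release them again later
        apt-mark unhold zfs-dkms zfsutils-linux zfs-zed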
  8. To install OMV you should log in to your Helios via SSH, then:

        root@helios64:~# armbian-config

     and choose Software -> Softy -> OMV. You can verify that it installed:

        root@helios64:~# dpkg -l | grep openmediavault
        ii  openmediavault                 5.5.17-3  all    openmediavault - The open network attached storage solution
        ii  openmediavault-fail2ban        5.0.5     all    OpenMediaVault Fail2ban plugin
        ii  openmediavault-flashmemory     5.0.7     all    folder2ram plugin for OpenMediaVault
        ii  openmediavault-keyring         1.0       all    GnuPG archive keys of the OpenMediaVault archive
        ii  openmediavault-luksencryption  5.0.2     all    OpenMediaVault LUKS encryption plugin
        ii  openmediavault-omvextrasorg    5.4.2     all    OMV-Extras.org Package Repositories for OpenMediaVault
        ii  openmediavault-zfs             5.0.5     arm64  OpenMediaVault plugin for ZFS
  9. Hey, are you using compression on your ZFS datasets? Yesterday I wanted to do some cleanup (previously I had only the pool, and all directories lived directly in it), so I created a new dataset for each 'directory', like home, docker-data, etc. I also set them up with ACL support and compression (a one-step alternative is sketched below):

        zfs set aclinherit=passthrough data/docker-data
        zfs set acltype=posixacl data/docker-data
        zfs set xattr=sa data/docker-data
        zfs set compression=lz4 data/docker-data

     I successfully moved 3 directories (about 300 GB each) from the main 'data' dataset into 'data/something'. The load was quite big and the write speed was about 40-55 MB/s (ZFS mirror, 2x 4 TB HDD). When I tried the same with another directory I got a kernel panic, and I hit the problem twice yesterday. Finally I disabled compression on the remaining datasets. Kernel: 5.9.11.

        Dec 14 22:46:36 localhost kernel: [  469.592940] Modules linked in: quota_v2 quota_tree dm_crypt algif_skcipher af_alg dm_mod softdog governor_performance r8152 snd_soc_hdmi_codec pwm_fan gpio_charger leds_pwm snd_soc_rockchip_i2s snd_soc_core rockchip_vdec(C) snd_pcm_dmaengine snd_pcm hantro_vpu(C) rockchip_rga v4l2_h264 snd_timer fusb302 videobuf2_dma_contig rockchipdrm snd videobuf2_dma_sg dw_mipi_dsi v4l2_mem2mem videobuf2_vmalloc dw_hdmi soundcore videobuf2_memops tcpm analogix_dp panfrost videobuf2_v4l2 gpu_sched typec videobuf2_common drm_kms_helper cec videodev rc_core mc sg drm drm_panel_orientation_quirks cpufreq_dt gpio_beeper zfs(POE) zunicode(POE) zzstd(OE) zlua(OE) nfsd auth_rpcgss nfs_acl lockd grace zcommon(POE) znvpair(POE) zavl(POE) icp(POE) spl(OE) sunrpc lm75 ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear md_mod realtek dwmac_rk stmmac_platform stmmac mdio_xpcs adc_keys
        Dec 14 22:46:36 localhost kernel: [  469.600200] CPU: 5 PID: 5769 Comm: z_wr_iss Tainted: P         C OE     5.9.11-rockchip64 #20.11.1
        Dec 14 22:46:36 localhost kernel: [  469.600982] Hardware name: Helios64 (DT)
        Dec 14 22:46:36 localhost kernel: [  469.601328] pstate: 20000005 (nzCv daif -PAN -UAO BTYPE=--)
        Dec 14 22:46:36 localhost kernel: [  469.601942] pc : lz4_compress_zfs+0xfc/0x7d0 [zfs]
        Dec 14 22:46:36 localhost kernel: [  469.602367] lr : 0x11be8
        Dec 14 22:46:36 localhost kernel: [  469.602591] sp : ffff80001434bbd0
        Dec 14 22:46:36 localhost kernel: [  469.602882] x29: ffff80001434bbd0 x28: ffff0000af99cf80
        Dec 14 22:46:36 localhost kernel: [  469.603349] x27: 00000000000000ff x26: ffff000064adaca8
        Dec 14 22:46:36 localhost kernel: [  469.603815] x25: ffff800022378be8 x24: ffff800012855004
        Dec 14 22:46:36 localhost kernel: [  469.604280] x23: ffff0000a04c4000 x22: ffff8000095c0000
        Dec 14 22:46:36 localhost kernel: [  469.604746] x21: ffff800012855000 x20: 0000000000020000
        Dec 14 22:46:36 localhost kernel: [  469.605212] x19: ffff800022367000 x18: 000000005b5eaa7e
        Dec 14 22:46:36 localhost kernel: [  469.605677] x17: 00000000fffffff0 x16: ffff800022386ff8
        Dec 14 22:46:36 localhost kernel: [  469.606143] x15: ffff800022386ffa x14: ffff800022386ffb
        Dec 14 22:46:36 localhost kernel: [  469.606609] x13: 00000000ffffffff x12: ffffffffffff0001
        Dec 14 22:46:36 localhost kernel: [  469.607075] x11: ffff800012867a81 x10: 000000009e3779b1
        Dec 14 22:46:36 localhost kernel: [  469.607540] x9 : ffff80002236a2f2 x8 : ffff80002237a2f9
        Dec 14 22:46:36 localhost kernel: [  469.608006] x7 : ffff800012871000 x6 : ffff800022387000
        Dec 14 22:46:36 localhost kernel: [  469.608472] x5 : ffff800022386ff4 x4 : ffff80002237a301
        Dec 14 22:46:36 localhost kernel: [  469.608937] x3 : 0000000000000239 x2 : ffff80002237a2f1
        Dec 14 22:46:36 localhost kernel: [  469.609402] x1 : ffff800022379a3b x0 : 00000000000005b5
        Dec 14 22:46:36 localhost kernel: [  469.609870] Call trace:
        Dec 14 22:46:36 localhost kernel: [  469.610234]  lz4_compress_zfs+0xfc/0x7d0 [zfs]
        Dec 14 22:46:36 localhost kernel: [  469.610737]  zio_compress_data+0xc0/0x13c [zfs]
        Dec 14 22:46:36 localhost kernel: [  469.611243]  zio_write_compress+0x294/0x6d0 [zfs]
        Dec 14 22:46:36 localhost kernel: [  469.611764]  zio_execute+0xa4/0x144 [zfs]
        Dec 14 22:46:36 localhost kernel: [  469.612137]  taskq_thread+0x278/0x450 [spl]
        Dec 14 22:46:36 localhost kernel: [  469.612517]  kthread+0x118/0x150
        Dec 14 22:46:36 localhost kernel: [  469.612804]  ret_from_fork+0x10/0x34
        Dec 14 22:46:36 localhost kernel: [  469.613660] ---[ end trace 34e3e71ff3a20382 ]---
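     In case it's useful to someone: the same properties can also be set at creation time, so a new dataset starts out consistent instead of being tuned afterwards. A sketch; the dataset name is the one from my pool:

        # create the dataset with ACLs, xattrs-in-SA and lz4 in one step
        zfs create \
          -o aclinherit=passthrough \
          -o acltype=posixacl \
          -o xattr=sa \
          -o compression=lz4 \
          data/docker-data

        # confirm what actually took effect
        zfs get aclinherit,acltype,xattr,compression data/docker-data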
  10. Did you try jumping to Armbian 20.11 and kernel 5.9.10? I've been running 5.9.10 since Friday without any issues. Before that 5.9.9(?), and I successfully copied more than 2 TB of data (ZFS, raid1). I also ran Docker and compiled ZFS from sources a few times. The load was big, but I don't see any issues.
  11. Hey, I think a dedicated topic for ZFS on Helios64 is needed; many people want to have it. As we know (and many of us are playing with dev builds for the Helios), it is not easy to work with ZFS here. I wrote a few scripts, maybe they can help someone. Thanks @jbergler and @ShadowDance, I used your ideas to complete it.

      The problem is that playing with OMV and its ZFS plugin pulls in dependencies that remove our freshly built version of ZFS. I tested it: everything survived, but ZFS is downgraded to the latest version from the repo, for example:

        root@helios64:~# zfs --version
        zfs-0.8.4-2~bpo10+1
        zfs-kmod-2.0.0-rc6
        root@helios64:~# uname -a
        Linux helios64 5.9.11-rockchip64 #trunk.1 SMP PREEMPT Thu Nov 26 01:32:45 CET 2020 aarch64 GNU/Linux

      I tested it with kernel 5.9.10 and with today's clean install with 5.9.11.

      First we need Docker installed (armbian-config -> Software -> Softy -> Docker).

      Create a dedicated directory with a Dockerfile. We customize the ubuntu:bionic image with the required libraries and GCC 10; for later builds we can skip this step and just adjust and run the build-zfs.sh script:

        mkdir zfs-builder
        cd zfs-builder
        vi Dockerfile

        FROM ubuntu:bionic
        RUN apt update; \
            apt install -y build-essential autoconf automake bison flex libtool gawk alien fakeroot dkms \
                libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev \
                libelf-dev python3 python3-dev python3-setuptools python3-cffi libffi-dev; \
            apt install -y software-properties-common; \
            add-apt-repository ppa:ubuntu-toolchain-r/test; \
            apt install -y gcc-10 g++-10; \
            update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10; \
            update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10

      Build the Docker image for the build step:

        docker build --tag zfs-build-ubuntu-bionic:0.1 .

      Create the build script:

        vi build-zfs.sh

        #!/bin/bash
        # define zfs version
        zfsver="zfs-2.0.0-rc6"
        # create build directory
        mkdir /tmp/zfs-builds && cd "$_"
        rm -rf /tmp/zfs-builds/inside_zfs.sh
        apt-get download linux-headers-current-rockchip64
        git clone -b $zfsver https://github.com/openzfs/zfs.git $zfsver-$(uname -r)

        # create the script that runs inside the container
        echo "creating file to execute it inside container"
        cat > /tmp/zfs-builds/inside_zfs.sh <<EOF
        #!/bin/bash
        cd scratch/
        dpkg -i linux-headers-current-*.deb
        zfsver="zfs-2.0.0-rc6"
        cd "/scratch/$zfsver-$(uname -r)"
        sh autogen.sh
        ./configure
        make -s -j$(nproc)
        make deb
        mkdir "/scratch/deb-$zfsver-$(uname -r)"
        cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
        rm -rf "/scratch/$zfsver-$(uname -r)"
        exit
        EOF
        chmod +x /tmp/zfs-builds/inside_zfs.sh

        echo ""
        echo "####################"
        echo "starting container.."
        echo "####################"
        echo ""
        docker run --rm -it -v /tmp/zfs-builds:/scratch zfs-build-ubuntu-bionic:0.1 /bin/bash /scratch/inside_zfs.sh

        # Cleanup packages (if installed).
        modprobe -r zfs zunicode zzstd zlua zcommon znvpair zavl icp spl
        apt remove --yes zfsutils-linux zfs-zed zfs-initramfs
        apt autoremove --yes

        # Install the freshly built packages.
        dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/kmod-zfs-$(uname -r)*.deb
        dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/{libnvpair1,libuutil1,libzfs2,libzpool2,python3-pyzfs,zfs}_*.deb

        echo ""
        echo "###################"
        echo "building complete"
        echo "###################"
        echo ""

      Make it executable and run it under screen, logging to a file:

        chmod +x build-zfs.sh
        screen -L -Logfile buildlog.txt ./build-zfs.sh
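      After the script finishes, a quick sanity check before trusting the new module (a sketch):

        # load the freshly installed module and confirm userland and kmod versions match
        modprobe zfs
        zfs --version

        # lists pools that could be imported, without importing anything
        zpool import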
  12. Hey, I was able to build ZFS zfs-2.0.0-rc6 from sources on my Helios64 using @jbergler's instructions, Debian host (building inside a Docker Ubuntu container):

        root@helios64:~# uname -a
        Linux helios64 5.9.10-rockchip64 #trunk.39 SMP PREEMPT Sun Nov 22 10:36:59 CET 2020 aarch64 GNU/Linux

      but now I'm stuck on an old glibc version; the userland binaries were linked against a glibc newer than buster's 2.28, so the loader refuses them:

        root@helios64:~/zfs_deb# zfs
        zfs: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /lib/aarch64-linux-gnu/libzfs.so.4)
        root@helios64:~/zfs_deb# zfs --version
        zfs: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /lib/aarch64-linux-gnu/libzfs.so.4)
        root@helios64:~/zfs_deb# ldd --version
        ldd (Debian GLIBC 2.28-10) 2.28
        Copyright (C) 2018 Free Software Foundation, Inc.
        This is free software; see the source for copying conditions.  There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
        Written by Roland McGrath and Ulrich Drepper.

      I also installed kmod-zfs instead of the zfs-dkms_2.0.0-0_arm64.deb package. When I tried to install zfs-dkms I got errors:

        Setting up libnvpair1 (2.0.0-0) ...
        Setting up libuutil1 (2.0.0-0) ...
        Setting up libzfs2-devel (2.0.0-0) ...
        Setting up libzfs2 (2.0.0-0) ...
        Setting up libzpool2 (2.0.0-0) ...
        Setting up python3-pyzfs (2.0.0-0) ...
        Setting up zfs-dkms (2.0.0-0) ...
        WARNING: /usr/lib/dkms/common.postinst does not exist.
        ERROR: DKMS version is too old and zfs was not built with legacy DKMS support.
        You must either rebuild zfs with legacy postinst support or upgrade DKMS to a more current version.
        dpkg: error processing package zfs-dkms (--install):
         installed zfs-dkms package post-installation script subprocess returned error exit status 1
        Setting up zfs-dracut (2.0.0-0) ...
        Setting up zfs-initramfs (2.0.0-0) ...
        Setting up zfs-test (2.0.0-0) ...
        Setting up zfs (2.0.0-0) ...
        Processing triggers for libc-bin (2.28-10) ...
        Processing triggers for systemd (241-7~deb10u4) ...
        Processing triggers for man-db (2.8.5-2) ...
        Errors were encountered while processing:
         zfs-dkms
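      For anyone hitting the same wall: you can check which glibc symbol versions a binary demands before installing it, and compare against what the host provides. A sketch; the library path is the one from my error above, and the underlying fix is to build in a container matching the host distro (e.g. debian:buster rather than ubuntu:bionic):

        # which GLIBC_* symbol versions does the library require?
        objdump -T /lib/aarch64-linux-gnu/libzfs.so.4 | grep -o 'GLIBC_[0-9.]*' | sort -uV

        # what does the host glibc provide?
        ldd --version | head -n 1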