grek

  • Content Count

    13
  • Joined

  • Last visited

About grek

  • Rank
    Member


  1. Thanks @ShadowDance for the tip, my scrub test finished without errors
  2. Today I manually ran a zfs scrub on my ZFS mirror pool and got similar errors. The funny thing is that normally the scrub is initiated by cron without any issue. Any suggestion what I should do? I ran: zpool clear data sda-crypt and zpool clear data sdb-crypt on my pools and now the state is ONLINE, but I'm a little afraid about my data ...... Output before today's manual zfs scrub - scrub without errors on Sun Mar 14...... root@helios64:/# zpool status -v data pool: data state: ONLINE scan: scrub repaired 0B in 06:17:38 with 0 errors on
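The clear-and-verify sequence from this post can be sketched as below. Since `zpool` has to run on the NAS itself, the sketch only prints the commands; the pool and vdev names (`data`, `sda-crypt`, `sdb-crypt`) are the ones from the post:

```shell
#!/bin/sh
# Sketch of the recovery steps above: reset per-device error counters,
# then scrub again to confirm the mirror is really clean.
POOL=data
for dev in sda-crypt sdb-crypt; do
    echo "zpool clear $POOL $dev"   # zero the READ/WRITE/CKSUM counters
done
echo "zpool scrub $POOL"            # re-read every block in the pool
echo "zpool status -v $POOL"        # error columns should stay at 0
```

Note that `zpool clear` only resets the counters; it is the follow-up scrub that tells you whether the errors were transient.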
  3. I tried to change the CPU governor to ondemand: root@helios64:/# cat /etc/default/cpufrequtils ENABLE="true" GOVERNOR=ondemand MAX_SPEED=1800000 MIN_SPEED=408000 but after 5 minutes my server rebooted. Previously I used: cat /etc/default/cpufrequtils ENABLE="true" GOVERNOR=performance MAX_SPEED=1416000 MIN_SPEED=1416000 for the last 3 kernel updates without any issue. Uptimes were mostly about 2-3 weeks without kernel panics or reboots. Unfortunately I have nothing in my logs. I have changed verbose mode to 7, and I'm logging to a file my co
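For reference, this is the /etc/default/cpufrequtils the poster reports as stable: the CPU pinned at 1.416 GHz with the performance governor. Whether ondemand across the full 408 MHz-1.8 GHz range is safe on a given board is exactly what the post above is questioning:

```shell
# /etc/default/cpufrequtils -- the previously stable settings from the post
ENABLE="true"
GOVERNOR=performance
MAX_SPEED=1416000   # pin max and min to the same frequency,
MIN_SPEED=1416000   # so the governor never switches operating points
```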
  4. Hey, I have a ZFS mirror with LUKS encryption on top + SSD cache root@helios64:~# zpool status pool: data state: ONLINE scan: scrub repaired 0B in 05:27:58 with 0 errors on Sun Feb 14 20:06:49 2021 config: NAME STATE READ WRITE CKSUM data ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 sda-crypt ONLINE 0 0 0 sdb-crypt ONLINE 0 0 0 cache sdc2 ONLINE 0 0 0 Here you have my results.. it looks like it is faster to have LUKS on top ...
  5. Thx @FloBaoti, I had to remove openmediavault-zfs first.... root@helios64:/# zfs --version zfs-2.0.1-1 zfs-kmod-2.0.1-1 root@helios64:/# uname -a Linux helios64 5.10.12-rockchip64 #21.02.1 SMP PREEMPT Wed Feb 3 20:55:02 CET 2021 aarch64 GNU/Linux root@helios64:/# Now I have working ZFS again
  6. It looks like the latest update, Linux helios64 5.10.12-rockchip64 #21.02.1, broke the ZFS DKMS build. After the reboot the zfs kernel module is missing... Probably I have to build it manually again
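A hedged sketch of checking for, and rebuilding, the missing module after a kernel upgrade. It only prints the privileged commands, and the `dkms` step assumes ZFS is actually registered with dkms on the box (check `dkms status` first):

```shell
#!/bin/sh
# After a kernel upgrade, the dkms-built module must exist for the
# *running* kernel, not just the previous one.
KVER=$(uname -r)
echo "Running kernel: $KVER"
echo "modprobe zfs                 # fails if no zfs module was built for $KVER"
echo "dkms status                  # lists which kernels have a zfs build"
echo "dkms autoinstall -k $KVER    # rebuild registered dkms modules for $KVER"
```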
  7. @tionebrr I built it with this script yesterday without any issue. I only changed the line make deb to make deb-kmod to create only the kmod package (I needed it after a kernel upgrade): cd "/scratch/$zfsver-$(uname -r)" sh autogen.sh ./configure make -s -j$(nproc) make deb-kmod mkdir "/scratch/deb-$zfsver-$(uname -r)" cp *.deb "/scratch/deb-$zfsver-$(uname -r)" rm -rf "/scratch/$zfsver-$(uname -r)" exit EOF If you have trouble please PM me, I can send you this deb package. (mine is: kmod-zfs-5.9.14-rockchip64_2.0.0-1_arm64.deb ), I have changed zfsver git t
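The build steps quoted above, wrapped into a function for readability. This is a sketch, not the poster's full script: `$zfsver` and the `/scratch` layout are taken from the snippet and are placeholders, and the function expects to run inside the build container with a ZFS source checkout already in place:

```shell
#!/bin/sh
# Build only the kernel-module .deb for the running kernel, as in the post.
build_zfs_kmod() {
    zfsver=$1                                      # e.g. 2.0.0
    cd "/scratch/$zfsver-$(uname -r)" || return 1  # source checkout expected here
    sh autogen.sh
    ./configure
    make -s -j"$(nproc)"
    make deb-kmod                                  # kmod package only, not full deb
    mkdir -p "/scratch/deb-$zfsver-$(uname -r)"
    cp ./*.deb "/scratch/deb-$zfsver-$(uname -r)"
}
```

Keeping the resulting kmod-zfs .deb around means a kernel rollback does not force a full rebuild.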
  8. To install OMV you should log in to your Helios via SSH, then: root@helios64:~# armbian-config Software -> Softy -> OMV You can verify it installed: root@helios64:~# dpkg -l |grep openmediavault ii openmediavault 5.5.17-3 all openmediavault - The open network attached storage solution ii openmediavault-fail2ban 5.0.5 all OpenMediaVault Fail2ban plugin ii openmediavault-flashmemory 5.0.7
  9. Hey, do you use compression on your ZFS datasets? Yesterday I wanted to do some cleanup (previously I had only the pool created, with all directories directly in it), so I created a new dataset for each 'directory' like home, docker-data etc. I also set them up with ACL support and compression: zfs set aclinherit=passthrough data/docker-data zfs set acltype=posixacl data/docker-data zfs set xattr=sa data/docker-data zfs set compression=lz4 data/docker-data I successfully moved 3 directories (about 300 GB each) from the main 'data' dataset to 'data'/something. The load wa
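The four `zfs set` calls above can also be folded into dataset creation with `-o`, so a new dataset starts out configured. A sketch that just prints the command (the dataset name is the example from the post):

```shell
#!/bin/sh
DATASET=data/docker-data   # example dataset from the post
# Equivalent to the four `zfs set` commands, applied at creation time:
echo "zfs create -o aclinherit=passthrough -o acltype=posixacl -o xattr=sa -o compression=lz4 $DATASET"
```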
  10. Did you try jumping to Armbian 20.11 and kernel 5.9.10? I have been using 5.9.10 since Friday without any issues. Before that 5.9.9 (I think), and I successfully copied more than 2 TB of data (ZFS - raid1). I also ran Docker and compiled ZFS from sources a few times. The load was high but I don't see any issues.
  11. Hey, I think a dedicated topic for ZFS on Helios64 is needed; many people want to have it. As we know, and as many of us playing with dev builds for the Helios have found, it is not easy to work with ZFS. I wrote a few scripts, maybe they can help some of you in some way. Thanks @jbergler and @ShadowDance, I used your ideas to complete them. The problem is when playing with OMV and its ZFS plugin: the plugin's dependencies then remove our latest version of ZFS. I tested it; everything survived, but ZFS is downgraded to the latest version from the repo, for example: root@helios64:~# zfs --versio
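One way to keep apt dependency resolution (for instance, when installing the OMV ZFS plugin) from downgrading a locally built ZFS is to put the packages on hold. A sketch that only prints the commands; the package names here are assumptions, so list yours first with `dpkg -l | grep zfs`:

```shell
#!/bin/sh
# Hold locally built ZFS packages so apt cannot silently replace them
# with the older repo versions. Adjust PKGS to match `dpkg -l | grep zfs`.
PKGS="zfsutils-linux zfs-dkms"
for pkg in $PKGS; do
    echo "apt-mark hold $pkg"
done
```

`apt-mark unhold` reverses this when you actually want an upgrade from the repo.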
  12. Hey, I was able to build ZFS zfs-2.0.0-rc6 from sources using @jbergler's instructions on my Helios64, Debian (inside an Ubuntu Docker container) root@helios64:~# uname -a Linux helios64 5.9.10-rockchip64 #trunk.39 SMP PREEMPT Sun Nov 22 10:36:59 CET 2020 aarch64 GNU/Linux root@helios64:~# but now I'm stuck with an old glibc version. root@helios64:~/zfs_deb# zfs zfs: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /lib/aarch64-linux-gnu/libzfs.so.4) root@helios64:~/zfs_deb# zfs --version zfs: /lib/aarch64-linux-gnu/libm.so.6: version
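A `version 'GLIBC_2.29' not found` error usually means the library was linked against a newer glibc (here, the one inside the Ubuntu build container) than the Debian host actually ships. A sketch for checking both sides; the library path is taken from the error message above, and the commands are printed rather than run:

```shell
#!/bin/sh
LIB=/lib/aarch64-linux-gnu/libzfs.so.4   # path from the error message
echo "ldd --version | head -n1    # glibc version installed on the host"
# Highest GLIBC symbol version the library actually requires:
echo "objdump -T $LIB | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n1"
```

If the required version is newer than the host's, building in a container whose base image matches the host distribution avoids the mismatch.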