Alexander Eiblinger

Everything posted by Alexander Eiblinger

  1. From my wishlist:
     - Wake on LAN - yep, I know that the Helios64 supports WOL, but in practice it still doesn't work
     - a properly wired 2.5 GbE connector - yep, I know, but it is frustrating anyway, especially due to Kobol's warranty handling (send it back at your own risk, wait two months, wait for the replacement to be sent back, wait another two months, hope it works)
     - standard SATA connectors - my ZFS pool fails rather often due to checksum errors, which seems to be related to bad cables. With the shipped non-standard cables, this is however hard to solve. If the enclosure / board had standard connectors, I
  2. Thanks for the hint with the cables - unfortunately they are rather hard to replace. Interestingly enough, I've today removed the LUKS layer from all three disks, so my pool is now using the disks directly. As an (unexpected) result, scrub now completes without degrading the pool - at least it worked three times today without problems. So for the moment it seems that the LUKS encryption was the source of my issue.
  3. Hi, I have major issues with my Helios64 / ZFS setup - maybe anyone can give me a good hint. I'm running the following system: Linux helios64 5.9.14-rockchip64 #20.11.4 SMP PREEMPT Tue Dec 15 08:52:20 CET 2020 aarch64 GNU/Linux I have three WD 4TB Plus disks - each disk is encrypted with LUKS. The three encrypted disks are bundled into a raidz1 zpool "archive" using ZFS. Basically this setup works pretty well, but under rather high disk usage, e.g. during a scrub, the whole pool degrades due to read / CRC errors. As an example: zpool status -x pool: archive
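A layout like the one described above could be sketched roughly as follows. The device names (sda/sdb/sdc) and options are assumptions, not the poster's exact commands; the script is only written out and syntax-checked here, since the real commands need root and would destroy data:

```shell
# Sketch of the described layout: three LUKS-encrypted disks combined
# into a raidz1 pool. Device names and options are assumptions.
cat > /tmp/make-pool.sh <<'EOF'
set -e
for d in sda sdb sdc; do
    cryptsetup luksFormat /dev/$d        # encrypt each raw disk
    cryptsetup open /dev/$d crypt_$d     # map to /dev/mapper/crypt_$d
done
# Build the raidz1 pool on the dm-crypt mappings:
zpool create -o ashift=12 archive raidz1 \
    /dev/mapper/crypt_sda /dev/mapper/crypt_sdb /dev/mapper/crypt_sdc
EOF
sh -n /tmp/make-pool.sh && echo "syntax OK"
```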
  4. That's why I wrote / read 5 GB ... the Helios64 has "only" 4 GB RAM, so if 5 GB are read / written, the cache cannot hold a usable copy of the data.
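The reasoning above - make the test file larger than RAM so the page cache cannot serve it - can also be enforced explicitly. A minimal sketch, with sizes scaled down so it runs anywhere; on the Helios64 you would use a file larger than the 4 GB of RAM (e.g. bs=1GB count=5):

```shell
# Sketch: two ways to keep the page cache out of a dd benchmark.
set -e

# 1) conv=fdatasync forces the data to disk before dd exits, so the
#    reported time includes the flush, not just filling the cache.
dd if=/dev/zero of=/tmp/cache_test.bin bs=1M count=16 conv=fdatasync 2>/dev/null

# 2) Drop the page cache before the read test (needs root, shown as a comment):
#    sync; echo 3 > /proc/sys/vm/drop_caches

# Read test.
dd if=/tmp/cache_test.bin of=/dev/null bs=1M 2>/dev/null
```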
  5. Thank you for your answer. I know about ashift=12; my tests were made with this setting. I also tried your settings (without compression!), but they make no real difference. Using three disks brings things up to ~80 MB/s - which is still unacceptable. But I think I found my problem: it is indeed related to the block size - not the block size of the ZFS pool, but the block size dd is using. This is what I'm usually doing: root@helios64:/test# dd if=test2.bin of=/dev/null 9765625+0 records in 9765625+0 records out 5000000000 bytes (5.0 GB,
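The record count in that output gives it away: 5000000000 bytes / 512 = 9765625, i.e. dd fell back to its default 512-byte block size and issued millions of tiny I/Os. A small sketch of the effect (file paths are illustrative, sizes scaled down):

```shell
# Sketch: dd without an explicit bs= uses 512-byte blocks.
set -e

dd if=/dev/zero of=/tmp/blk.bin bs=1M count=8 2>/dev/null

# Same data copied with the default 512-byte blocks vs 1 MiB blocks;
# the result is identical, only the record count (and speed) differs.
dd if=/tmp/blk.bin of=/tmp/copy512.bin 2>/dev/null       # bs defaults to 512
dd if=/tmp/blk.bin of=/tmp/copy1m.bin bs=1M 2>/dev/null

cmp /tmp/copy512.bin /tmp/copy1m.bin && echo "identical"
```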
  6. Hi, I have successfully installed my new Helios64 and enabled ZFS. Everything works so far, but it seems I'm missing something "important" in regards to ZFS. I did the following tests: I took one of my WD 4TB Plus drives (in slot 3) and formatted the disk with a) ext4 (mkfs.ext4 /dev/sdc), b) btrfs (mkfs.btrfs /dev/sdc) and c) zfs (zpool create test /dev/sdc). Based on these three formats I did a "dd if=/dev/zero of=test2.bin bs=1GB count=5" to measure write performance and a "dd if=test2.bin of=/dev/null" to get the read performance (all tests were done 3 times,
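One caveat with this methodology: /dev/zero emits perfectly compressible data, so on ZFS with compression enabled a "write" may barely touch the disk at all. A sketch of a variant using incompressible data (paths are illustrative, sizes scaled down for the example):

```shell
# Sketch: benchmark writes from pre-generated random data instead of
# /dev/zero, so ZFS compression cannot inflate the numbers.
set -e

# One-time setup: incompressible source data (scale up for real runs).
dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=8 2>/dev/null

# Write benchmark from the random file, flushed to disk before dd exits:
dd if=/tmp/rand.bin of=/tmp/zfs_write_test.bin bs=1M conv=fdatasync 2>/dev/null
```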