Alexander Eiblinger

  • Posts: 9
Alexander Eiblinger's Achievements

  1. Thanks! Dug a little bit deeper - the rules are actually mostly from openmediavault; I did not realize that some were set up before! Thanks to both of you for pointing me in the right direction!
  2. Hi, sorry, probably a stupid question - but I have not been able to figure out the answer so far. I have a helios64 running - a pretty standard installation, including openmediavault and docker, as described on the kobol help page. I noticed that some firewall/iptables rules are set, for example:

     tester@helios64:/etc# sudo iptables -L
     Chain INPUT (policy DROP)
     target     prot opt source      destination
     ACCEPT     all  --  anywhere    anywhere    ctstate RELATED,ESTABLISHED
     ACCEPT     icmp --  anywhere    anywhere    state NEW,RELATED,ESTABLISHED
     ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:http
     ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:https
     ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:8384
     ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:3128
     ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:1443
     ACCEPT     udp  --  anywhere    anywhere    udp dpt:domain
     ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:domain
     ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:microsoft-ds
     ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:netbios-ssn
     ACCEPT     udp  --  anywhere    anywhere    udp dpt:netbios-dgm
     ACCEPT     udp  --  anywhere    anywhere    udp dpt:netbios-ns
     ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:ftp
     ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:49152
     ACCEPT     tcp  --  anywhere    anywhere    tcp dpt:22000
     ACCEPT     udp  --  anywhere    anywhere    udp dpt:1900

     Chain FORWARD (policy ACCEPT)
     target     prot opt source      destination
     DOCKER-USER  all --  anywhere   anywhere
     DOCKER-ISOLATION-STAGE-1  all --  anywhere   anywhere
     ACCEPT     all  --  anywhere    anywhere    ctstate RELATED,ESTABLISHED
     DOCKER     all  --  anywhere    anywhere
     ACCEPT     all  --  anywhere    anywhere
     ACCEPT     all  --  anywhere    anywhere
     ...

     What I was not able to figure out is where these rules come from. In openmediavault no firewall rules are set, and I was also not able to find any iptables settings in /etc/network etc. Does anyone have an idea where these rules are configured, and by which service they are set up? Thanks!
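     For anyone debugging the same question, a minimal sketch for tracing where rules come from, using only standard tools (the grep paths below are the usual Debian locations, adjust as needed):

     # Dump the full ruleset with packet/byte counters - chains with non-zero
     # counters are actually in use, which narrows down the responsible service
     sudo iptables-save -c

     # Search the usual places where persistent rules or firewall hooks live
     grep -rl iptables /etc/network /etc/systemd/system /usr/lib/systemd/system 2>/dev/null

     # The DOCKER-* chains in the FORWARD table are created by the docker daemon
     systemctl status docker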
  3. My Helios is "semi-stable". The only issue I'm having is every now and then a "lost" disk in ZFS - usually during high load.

     uname -a
     Linux helios64 5.9.14-rockchip64 #20.11.4 SMP PREEMPT Tue Dec 15 08:52:20 CET 2020 aarch64 GNU/Linux
  4. From my wishlist:
     • Wake on LAN - yep, I know that the Helios64 supports WOL, but in practice it is still not working.
     • A properly wired 2.5 GbE connector - yep, I know, but it is frustrating anyway, especially due to the warranty handling of kobol (send it back at your own risk, wait 2 months, get a replacement sent back, wait another 2 months, hope it works).
     • Standard SATA connectors - my ZFS pool fails rather often due to checksum errors, and this seems to be related to bad cables. With the shipped non-standard cables this is hard to solve; if the enclosure / board had standard connectors, I could simply use one of the 1000 cables lying around here.
     • More RAM - or support for standard RAM modules.
     And yes, the sliders for the disks need a redesign, too (the color is cool, but they are rather "cheap" and you need a lot of force to push them in / pull them out).
  5. Thanks for the hint with the cables - unfortunately they are rather hard to replace. Interestingly enough, I have today removed the luks layer from all three disks, so my pool is now using the disks directly. As an (unexpected) result, scrub now completes without degrading the pool - at least it has worked three times today without problems. So for the moment it seems that the luks encryption was the source of my issue.
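     In case anyone wants to try the same, a rough sketch of how such a swap can be done one disk at a time (pool and mapper names as in my other posts, the raw device is hypothetical; raidz1 tolerates only one missing disk, so let each resilver finish before touching the next one):

     sudo zpool offline archive EncSDC
     sudo cryptsetup luksClose EncSDC     # tear down the dm-crypt mapping
     sudo wipefs -a /dev/sdX              # /dev/sdX = raw disk behind EncSDC; destroys the LUKS header!
     sudo zpool replace archive EncSDC /dev/sdX
     sudo zpool status archive            # wait for the resilver to complete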
  6. Hi, I have major issues with my helios64 / ZFS setup - maybe someone can give me a good hint. I'm running the following system:

     Linux helios64 5.9.14-rockchip64 #20.11.4 SMP PREEMPT Tue Dec 15 08:52:20 CET 2020 aarch64 GNU/Linux

     I have three WD 4TB Plus disks - each disk is encrypted with luks. The three encrypted disks are bundled into a raidz1 zpool "archive" using zfs. Basically this setup works pretty well, but under rather high disk load, e.g. during a scrub, the whole pool degrades due to read / crc errors. As an example:

     zpool status -x
       pool: archive
      state: DEGRADED
     status: One or more devices are faulted in response to persistent errors.
             Sufficient replicas exist for the pool to continue functioning in a degraded state.
     action: Replace the faulted device, or use 'zpool clear' to mark the device repaired.
       scan: scrub in progress since Sun Jan 10 09:54:30 2021
             613G scanned at 534M/s, 363G issued at 316M/s, 1.46T total
             7.56M repaired, 24.35% done, 0 days 01:00:57 to go
     config:

             NAME        STATE     READ WRITE CKSUM
             archive     DEGRADED     0     0     0
               raidz1-0  DEGRADED    87     0     0
                 EncSDC  DEGRADED    78     0     0  too many errors  (repairing)
                 EncSDD  FAULTED     64     0     0  too many errors  (repairing)
                 EncSDE  DEGRADED    56     0    20  too many errors  (repairing)

     errors: No known data errors

     dmesg shows a lot of these errors:

     [Jan10 09:59] ata3.00: NCQ disabled due to excessive errors
     [  +0.000009] ata3.00: exception Emask 0x2 SAct 0x18004102 SErr 0x400 action 0x6
     [  +0.000002] ata3.00: irq_stat 0x08000000
     [  +0.000004] ata3: SError: { Proto }
     [  +0.000006] ata3.00: failed command: READ FPDMA QUEUED
     [  +0.000007] ata3.00: cmd 60/00:08:e8:92:01/08:00:68:00:00/40 tag 1 ncq dma 1048576 in
                            res 40/00:d8:68:8b:01/00:00:68:00:00/40 Emask 0x2 (HSM violation)
     [  +0.000002] ata3.00: status: { DRDY }
     [  +0.000004] ata3.00: failed command: READ FPDMA QUEUED
     [  +0.000006] ata3.00: cmd 60/80:40:e8:9a:01/03:00:68:00:00/40 tag 8 ncq dma 458752 in
                            res 40/00:d8:68:8b:01/00:00:68:00:00/40 Emask 0x2 (HSM violation)
     [  +0.000002] ata3.00: status: { DRDY }
     [  +0.000003] ata3.00: failed command: READ FPDMA QUEUED
     [  +0.000005] ata3.00: cmd 60/80:70:68:9e:01/00:00:68:00:00/40 tag 14 ncq dma 65536 in
                            res 40/00:d8:68:8b:01/00:00:68:00:00/40 Emask 0x2 (HSM violation)
     [  +0.000003] ata3.00: status: { DRDY }
     [  +0.000002] ata3.00: failed command: READ FPDMA QUEUED
     [  +0.000006] ata3.00: cmd 60/80:d8:68:8b:01/01:00:68:00:00/40 tag 27 ncq dma 196608 in
                            res 40/00:d8:68:8b:01/00:00:68:00:00/40 Emask 0x2 (HSM violation)
     [  +0.000002] ata3.00: status: { DRDY }
     [  +0.000002] ata3.00: failed command: READ FPDMA QUEUED
     [  +0.000005] ata3.00: cmd 60/00:e0:e8:8c:01/06:00:68:00:00/40 tag 28 ncq dma 786432 in
                            res 40/00:d8:68:8b:01/00:00:68:00:00/40 Emask 0x2 (HSM violation)
     [  +0.000002] ata3.00: status: { DRDY }
     [  +0.000007] ata3: hard resetting link
     [  +0.475899] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     [  +0.001128] ata3.00: configured for UDMA/133
     [  +0.000306] scsi_io_completion_action: 8 callbacks suppressed
     [  +0.000009] sd 2:0:0:0: [sdd] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
     [  +0.000006] sd 2:0:0:0: [sdd] tag#1 Sense Key : 0x5 [current]
     [  +0.000003] sd 2:0:0:0: [sdd] tag#1 ASC=0x21 ASCQ=0x4
     [  +0.000006] sd 2:0:0:0: [sdd] tag#1 CDB: opcode=0x88 88 00 00 00 00 00 68 01 92 e8 00 00 08 00 00 00
     [  +0.000003] print_req_error: 8 callbacks suppressed
     [  +0.000003] blk_update_request: I/O error, dev sdd, sector 1744933608 op 0x0:(READ) flags 0x700 phys_seg 16 prio class 0
     [  +0.000019] zio pool=archive vdev=/dev/mapper/EncSDC error=5 type=1 offset=893389230080 size=1048576 flags=40080cb0
     [  +0.000069] sd 2:0:0:0: [sdd] tag#8 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
     [  +0.000005] sd 2:0:0:0: [sdd] tag#8 Sense Key : 0x5 [current]
     [  +0.000003] sd 2:0:0:0: [sdd] tag#8 ASC=0x21 ASCQ=0x4
     [  +0.000004] sd 2:0:0:0: [sdd] tag#8 CDB: opcode=0x88 88 00 00 00 00 00 68 01 9a e8 00 00 03 80 00 00
     [  +0.000004] blk_update_request: I/O error, dev sdd, sector 1744935656 op 0x0:(READ) flags 0x700 phys_seg 7 prio class 0
     [  +0.000014] zio pool=archive vdev=/dev/mapper/EncSDC error=5 type=1 offset=893390278656 size=458752 flags=40080cb0
     [  +0.000049] sd 2:0:0:0: [sdd] tag#14 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
     [  +0.000004] sd 2:0:0:0: [sdd] tag#14 Sense Key : 0x5 [current]
     [  +0.000004] sd 2:0:0:0: [sdd] tag#14 ASC=0x21 ASCQ=0x4
     [  +0.000003] sd 2:0:0:0: [sdd] tag#14 CDB: opcode=0x88 88 00 00 00 00 00 68 01 9e 68 00 00 00 80 00 00
     [  +0.000004] blk_update_request: I/O error, dev sdd, sector 1744936552 op 0x0:(READ) flags 0x700 phys_seg 16 prio class 0
     [  +0.000013] zio pool=archive vdev=/dev/mapper/EncSDC error=5 type=1 offset=893390737408 size=65536 flags=1808b0

     The S.M.A.R.T. values of the disks are OK - only "UDMA_CRC_ERROR_COUNT" is increased (values ~25, increasing). What's also worth mentioning: if I start a scrub, I get these errors after about 10 seconds (rather quickly) - but it seems that if I let the scrub continue, no more errors occur. Does this indicate bad disks? The disks themselves do not indicate any problems (the selftest does not show issues). Is this related to the zfs implementation for armbian? Anything I can do to make this reliable? Thanks!
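     A note for readers with the same symptoms: "HSM violation" plus a rising UDMA_CRC_Error_Count usually points at the SATA link (cables/backplane) rather than at the platters. A mitigation often suggested for the Helios64 while debugging is to lower the link speed and/or disable NCQ via kernel parameters; a sketch, assuming Armbian's usual boot config file:

     # /boot/armbianEnv.txt - "extraargs" appends parameters to the kernel command line;
     # 3.0Gbps caps the SATA link speed, noncq disables native command queueing
     extraargs=libata.force=3.0Gbps,noncq

     # after a reboot, watch whether the CRC counter keeps climbing:
     sudo smartctl -A /dev/sdd | grep -i crc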
  7. That's why I wrote / read 5 GB ... the Helios64 has "only" 4 GB RAM, so if 5 GB are read / written, the cache should have no usable copy of the data.
  8. Thank you for your answer. I know about ashift=12; my tests were made with this setting. I also tried your settings (without compression!), but they make no real difference. Using three disks brings things up to ~80 MB/s - which is still unacceptable. But I think I found my problem: it is indeed related to the blocksize - not the blocksize of the zfs pool, but the blocksize dd is using. This is what I usually do:

     root@helios64:/test# dd if=test2.bin of=/dev/null
     9765625+0 records in
     9765625+0 records out
     5000000000 bytes (5.0 GB, 4.7 GiB) copied, 107.466 s, 46.5 MB/s

     dd uses a blocksize of 512 bytes here. For ext4 / btrfs this seems to be no issue - but zfs has a problem with it. If I explicitly use a blocksize of 4096, I get these results:

     root@helios64:/test# dd if=test2.bin of=/dev/null bs=4096
     1220703+1 records in
     1220703+1 records out
     5000000000 bytes (5.0 GB, 4.7 GiB) copied, 27.4704 s, 182 MB/s

     Which gives the figures expected! So, as so often: "layer 8 problem" - the problem sits in front of the screen.
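     For anyone who wants to reproduce this, a small sketch that compares read throughput at several block sizes (test2.bin is the 5 GB file from above; note that drop_caches clears the page cache for ext4/btrfs, while the zfs ARC is separate - for zfs it is the file being larger than RAM that keeps the runs honest):

     #!/bin/sh
     # compare dd read throughput for a range of block sizes (needs root for drop_caches)
     for bs in 512 4096 131072 1048576; do
         sync
         echo 3 > /proc/sys/vm/drop_caches
         printf 'bs=%s: ' "$bs"
         dd if=test2.bin of=/dev/null bs="$bs" 2>&1 | tail -n 1
     done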
  9. Hi, I have successfully installed my new helios64 and enabled zfs. Everything works so far, but it seems I'm missing something "important" in regard to ZFS. I did the following tests: I took one of my WD 4TB Plus drives (in slot 3) and formatted the disk with a) ext4 (mkfs.ext4 /dev/sdc), b) btrfs (mkfs.btrfs /dev/sdc) and c) zfs (zpool create test /dev/sdc). Based on these 3 formats I did a

     dd if=/dev/zero of=test2.bin bs=1GB count=5

     to measure write performance and a

     dd if=test2.bin of=/dev/null

     to get the read performance (all tests were done 3 times, results averaged). The results I got look like this:

     a) ext4  ... avg write 165 MB/s - read 180 MB/s
     b) btrfs ... avg write 180 MB/s - read 192 MB/s
     c) zfs   ... avg write 140 MB/s - read  45 MB/s (!)

     So while ext4 and btrfs produce pretty much the same (and expected) results for sequential read / write, zfs seems to be significantly slower on write and dramatically slower on read. I have to admit I'm new to zfs ... but is this expected / normal? zfs has tons of parameters to tune performance - are there any settings that are "needed" to achieve almost the same values as e.g. with btrfs? (Honestly, I expected zfs to perform equally to btrfs - as zfs should be the more "mature" file system ...) Thanks! A
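     A commonly suggested baseline for this kind of single-disk dd test (a sketch only, using the ashift=12 setting discussed above; compression is left off so dd against /dev/zero measures the disk rather than the compressor):

     sudo zpool create -o ashift=12 test /dev/sdc
     sudo zfs set recordsize=1M test    # large records favor sequential streaming
     sudo zfs set atime=off test        # avoid a metadata write on every read
     sudo zfs set compression=off test  # /dev/zero would otherwise compress to ~nothing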