
Alexander Eiblinger

Members
  • Posts: 16
  • Joined
  • Last visited

Reputation Activity

  1. Like
    Alexander Eiblinger reacted to aprayoga in Kobol Team is pulling the plug ;(   
    Hi everyone, thank you for all of your support.
    I'll still be around, but not full time as before.
     
    I saw Helios64 has several issues in this new release. I will start looking into it.
  2. Like
    Alexander Eiblinger reacted to meymarce in SATA issue, drive resets: ataX.00: failed command: READ FPDMA QUEUED   
    I am having the same issue, though I want to mention that I use Seagate drives, so the issue is not restricted to WD. I can reliably trigger it with a zfs scrub. Currently 4 disks are in use and it always happens on /dev/sdd, so disk 4. Disk 5 is the empty slot.
    I have read most of the comments, but did anyone really succeed with a new power harness or the new firmware?
    [16218.668509] ata4.00: exception Emask 0x10 SAct 0x1000084 SErr 0xb80100 action 0x6 frozen
    [16218.668523] ata4.00: irq_stat 0x08000000
    [16218.668530] ata4: SError: { UnrecovData 10B8B Dispar BadCRC LinkSeq }
    [16218.668539] ata4.00: failed command: READ FPDMA QUEUED
    [16218.668551] ata4.00: cmd 60/b8:10:e0:43:37/07:00:11:03:00/40 tag 2 ncq dma 1011712 in
                            res 40/00:38:98:4b:37/00:00:11:03:00/40 Emask 0x10 (ATA bus error)
    [16218.668556] ata4.00: status: { DRDY }
    [16218.668562] ata4.00: failed command: READ FPDMA QUEUED
    [16218.668574] ata4.00: cmd 60/b8:38:98:4b:37/07:00:11:03:00/40 tag 7 ncq dma 1011712 in
                            res 40/00:38:98:4b:37/00:00:11:03:00/40 Emask 0x10 (ATA bus error)
    [16218.668578] ata4.00: status: { DRDY }
    [16218.668584] ata4.00: failed command: READ FPDMA QUEUED
    [16218.668595] ata4.00: cmd 60/b8:c0:58:53:37/07:00:11:03:00/40 tag 24 ncq dma 1011712 in
                            res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x14 (ATA bus error)
    [16218.668600] ata4.00: status: { DRDY }
    [16218.668612] ata4: hard resetting link
    [16219.144463] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    [16219.163619] ata4.00: configured for UDMA/133
    [16219.164082] ata4.00: device reported invalid CHS sector 0
    [16219.164110] sd 3:0:0:0: [sdd] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=31s
    [16219.164116] sd 3:0:0:0: [sdd] tag#2 Sense Key : 0x5 [current]
    [16219.164121] sd 3:0:0:0: [sdd] tag#2 ASC=0x21 ASCQ=0x4
    [16219.164127] sd 3:0:0:0: [sdd] tag#2 CDB: opcode=0x88 88 00 00 00 00 03 11 37 43 e0 00 00 07 b8 00 00
    [16219.164133] blk_update_request: I/O error, dev sdd, sector 13173736416 op 0x0:(READ) flags 0x700 phys_seg 125 prio class 0
    [16219.164148] zio pool=r2pool vdev=/dev/disk/by-id/ata-ST10000VN0004-1ZD101_$serial-part1 error=5 type=1 offset=6744951996416 size=1011712 flags=40080cb0
    [16219.164209] sd 3:0:0:0: [sdd] tag#7 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=31s
    [16219.164213] sd 3:0:0:0: [sdd] tag#7 Sense Key : 0x5 [current]
    [16219.164218] sd 3:0:0:0: [sdd] tag#7 ASC=0x21 ASCQ=0x4
    [16219.164222] sd 3:0:0:0: [sdd] tag#7 CDB: opcode=0x88 88 00 00 00 00 03 11 37 4b 98 00 00 07 b8 00 00
    [16219.164227] blk_update_request: I/O error, dev sdd, sector 13173738392 op 0x0:(READ) flags 0x700 phys_seg 142 prio class 0
    [16219.164235] zio pool=r2pool vdev=/dev/disk/by-id/ata-ST10000VN0004-1ZD101_$serial-part1 error=5 type=1 offset=6744953008128 size=1011712 flags=40080cb0
    [16219.164286] sd 3:0:0:0: [sdd] tag#24 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=31s
    [16219.164291] sd 3:0:0:0: [sdd] tag#24 Sense Key : 0x5 [current]
    [16219.164295] sd 3:0:0:0: [sdd] tag#24 ASC=0x21 ASCQ=0x4
    [16219.164299] sd 3:0:0:0: [sdd] tag#24 CDB: opcode=0x88 88 00 00 00 00 03 11 37 53 58 00 00 07 b8 00 00
    [16219.164304] blk_update_request: I/O error, dev sdd, sector 13173740376 op 0x0:(READ) flags 0x700 phys_seg 127 prio class 0
    [16219.164311] zio pool=r2pool vdev=/dev/disk/by-id/ata-ST10000VN0004-1ZD101_$serial-part1 error=5 type=1 offset=6744954023936 size=1011712 flags=40080cb0
    [16219.164339] ata4: EH complete
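    A quick, hedged check for whether resets like these come from a marginal SATA link (cable / harness / backplane) rather than the disk itself - assuming smartmontools is installed and /dev/sdd is the affected drive:
      # Attribute 199 (UDMA_CRC_Error_Count) normally increments when frames are
      # corrupted on the SATA link; a value that keeps rising after each scrub
      # points at the cabling/connector rather than the platters.
      smartctl -A /dev/sdd | grep -i -E 'crc|199'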
  3. Like
    Alexander Eiblinger reacted to lyuyhn in Information on Autoshutdown / sleep / WOL?   
    Any news on this front? I'm really looking forward to being able to suspend and use WOL!
  4. Like
    Alexander Eiblinger reacted to Gareth Halfacree in SATA issue, drive resets: ataX.00: failed command: READ FPDMA QUEUED   
    It's the first of the month, so another automated btrfs scrub took place - and again triggered a bunch of DRDY drive resets on ata2.00. Are we any closer to figuring out what the problem is, here?
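    For reference, a sketch of how to see whether the scrub itself logged errors on that device - /mnt/pool is a hypothetical mount point, substitute your own:
      btrfs scrub status /mnt/pool          # result of the last scrub (errors found, duration)
      btrfs device stats /mnt/pool          # per-device read/write/corruption error counters
      dmesg | grep 'ata2.00' | tail -n 20   # correlate with the DRDY resets mentioned above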
  5. Like
    Alexander Eiblinger got a reaction from lanefu in Feature / Changes requests for future Helios64 board or enclosure revisions   
    From my wishlist:
      • Wake on Lan - yep, I know the Helios64 supports WOL, but in practice it still does not work (see the ethtool sketch right after this list).
      • A properly wired 2.5 GbE connector - yep, I know, but it is frustrating anyway, especially given Kobol's warranty handling (send it back at your own risk, wait 2 months, have a replacement sent back, wait another 2 months, hope it works).
      • Standard SATA connectors - my ZFS pool fails rather often with checksum errors, which seems to be related to bad cables. With the shipped non-standard cables this is hard to solve; if the enclosure / board had standard connectors, I could simply use one of the 1000 cables lying around here.
      • More RAM - or support for standard RAM modules.
      • And yes, the disk sliders need a redesign, too (the color is cool, but they feel rather "cheap" and you need a lot of force to push them in / pull them out).
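    A minimal sketch of how WOL is usually checked and enabled on a Linux NIC - assuming the interface is called eth0 (substitute your own), and not a confirmed fix for the Helios64:
      # Show whether the driver reports Wake-on-LAN support and its current mode.
      ethtool eth0 | grep -i wake
      # Request wake on magic packet; this usually has to be reapplied after each
      # boot (e.g. from a systemd unit or your network manager's WOL option), and
      # the board must keep the PHY powered while off/suspended for it to work.
      ethtool -s eth0 wol g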
  6. Like
    Alexander Eiblinger got a reaction from deucalion in Feature / Changes requests for future Helios64 board or enclosure revisions   
    From my wishlist:
      • Wake on Lan - yep, I know the Helios64 supports WOL, but in practice it still does not work.
      • A properly wired 2.5 GbE connector - yep, I know, but it is frustrating anyway, especially given Kobol's warranty handling (send it back at your own risk, wait 2 months, have a replacement sent back, wait another 2 months, hope it works).
      • Standard SATA connectors - my ZFS pool fails rather often with checksum errors, which seems to be related to bad cables. With the shipped non-standard cables this is hard to solve; if the enclosure / board had standard connectors, I could simply use one of the 1000 cables lying around here.
      • More RAM - or support for standard RAM modules.
      • And yes, the disk sliders need a redesign, too (the color is cool, but they feel rather "cheap" and you need a lot of force to push them in / pull them out).
  7. Like
    Alexander Eiblinger got a reaction from ShadowDance in zfs read vs. write performance   
    Thank you for your answer.
    I know about ashift=12; my tests were made with that setting. I also tried your settings (without compression!), but they make no real difference.
    Using three disks brings things up to ~80 MB/s - which is still unacceptable.
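    As an aside, a sketch of how to double-check what the pool is actually using - "tank" is a hypothetical pool/dataset name here, and zdb output varies a bit between OpenZFS versions:
      zdb -C tank | grep ashift             # ashift per vdev as stored in the pool config (12 = 4 KiB sectors)
      zfs get recordsize,compression tank   # dataset settings that affect streaming read throughput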
     
    But I think I found my problem:
    It is indeed related to the block size - not the block size of the zfs pool, but the block size dd is using:
     
    This is what I'm usually doing:
     
    root@helios64:/test# dd if=test2.bin of=/dev/null
    9765625+0 records in
    9765625+0 records out
    5000000000 bytes (5.0 GB, 4.7 GiB) copied, 107.466 s, 46.5 MB/s
    dd uses its default block size of 512 bytes here. For ext4 / btrfs this seems to be no issue - but zfs struggles with it.
    If I explicitly use a block size of 4096, I get these results:

     
    root@helios64:/test# dd if=test2.bin of=/dev/null bs=4096
    1220703+1 records in
    1220703+1 records out
    5000000000 bytes (5.0 GB, 4.7 GiB) copied, 27.4704 s, 182 MB/s
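    For anyone who wants to reproduce the comparison, a small sketch - it assumes the test file (test2.bin here) is clearly larger than the machine's RAM, otherwise later runs are served from the ZFS ARC and look unrealistically fast:
      # Compare dd read throughput at several block sizes on the same file.
      for bs in 512 4K 128K 1M; do
          echo "bs=$bs"
          dd if=test2.bin of=/dev/null bs=$bs 2>&1 | tail -n 1
      done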
    Which gives the expected figures!
     
    So, as so often: a "Layer 8 problem" - the problem sits in front of the screen.