Alexander Eiblinger

  • Content Count: 6
  • Joined
  • Last visited

Reputation Activity

  1. Like
    Alexander Eiblinger got a reaction from lanefu in Feature / Changes requests for future Helios64 board or enclosure revisions   
    From my wishlist:
    • Wake on LAN - yep, I know that the Helios64 supports WOL, but in practice it still doesn't work
    • a properly wired 2.5 GbE connector - yep, I know, but it is frustrating anyway, especially given Kobol's warranty handling (send it back at your own risk, wait 2 months, they send a replacement back, wait another 2 months, hope it works)
    • standard SATA connectors - my ZFS pool fails rather often with checksum errors, which seems to be related to bad cables. With the shipped non-standard cables this is hard to solve. If the enclosure / board had standard connectors, I could simply use one of the 1000 cables lying around here.
    • more RAM - or support for standard RAM modules
    • and yes, the disk sliders need a redesign, too (the color is cool, but they feel rather "cheap" and you need a lot of force to push them in / pull them out)
  2. Like
    Alexander Eiblinger got a reaction from deucalion in Feature / Changes requests for future Helios64 board or enclosure revisions   
  3. Like
    Alexander Eiblinger got a reaction from ShadowDance in zfs read vs. write performance   
    Thank you for your answer.
    I know about ashift=12; my tests were made with this setting. I also tried your settings (without compression!), but they make no real difference.
    Using three disks brings things up to ~80 MB/s - which is still unacceptable.
     
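    For reference, this is roughly how the relevant properties can be checked and set - a minimal sketch; I'm assuming the pool is simply called "test" here, adjust to your setup:
     
    # check the ashift the pool was actually created with
    zpool get ashift test
    # check the dataset record size (128K by default)
    zfs get recordsize test
    # disable compression for the benchmark, as suggested
    zfs set compression=off test
     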
    But I think I found my problem:
    It is actually related to the block size - not the block size of the ZFS pool, but the block size dd is using:
     
    This is what I'm usually doing:
     
    root@helios64:/test# dd if=test2.bin of=/dev/null
    9765625+0 records in
    9765625+0 records out
    5000000000 bytes (5.0 GB, 4.7 GiB) copied, 107.466 s, 46.5 MB/s
    dd uses its default block size of 512 bytes here. For ext4 / btrfs this seems to be no issue - but ZFS has a problem with it.
    If I explicitly use a block size of 4096, I get these results:

     
    root@helios64:/test# dd if=test2.bin of=/dev/null bs=4096
    1220703+1 records in
    1220703+1 records out
    5000000000 bytes (5.0 GB, 4.7 GiB) copied, 27.4704 s, 182 MB/s
    Which gives the expected figures!
     
    So as so often: "Layer 8 problem"  - the problem sits in front of the screen.
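     
    For anyone who wants to reproduce this, here is a minimal sketch that compares a few dd block sizes in one run (it assumes the test2.bin file from above sits on the ZFS dataset mounted at /test; note that repeated reads of the same file may be served from the ZFS ARC cache, so take the absolute numbers with care):
     
    # compare read throughput for several dd block sizes
    # (assumes /test is the ZFS dataset and test2.bin exists there;
    #  repeated runs may be served from the ARC cache)
    cd /test
    for bs in 512 4K 128K 1M; do
        echo "block size: $bs"
        # dd prints its statistics to stderr; keep only the summary line
        dd if=test2.bin of=/dev/null bs=$bs 2>&1 | tail -n 1
    done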