
Posted

I have just put my Helios64 into operation. It has five HDDs of 10 TB each. Three of them I combined with ZFS (0.8.6) into a raidz1; the other two disks I mounted individually, formatting one with XFS, while the last one already had NTFS as its filesystem. On the NTFS disk I have some very large *.vhdx files which I use only for copy tests.

I use Midnight Commander on the console for this.

 

Copying from the NTFS disk to the XFS disk reached a speed of about 130 MB/s.

Copying from the XFS disk to the ZFS raidz1 reached a speed of about 100-110 MB/s.

 

On their own these values tell me nothing. Can someone please tell me whether these speeds are reasonable?

 

Rosi

 

Posted

What was your speed when the disks were mounted in a different machine?

7,200 rpm SATA drives usually sustain this kind of number for sequential reads and writes; more speed can be attained by adding more spindles (drives). That said, I think your read-speed potential should be a bit higher. Try piping a file to /dev/null to measure the real read speed of one disk first, then piping /dev/zero to a file (make it larger than the cache) on that disk or another one to get the maximum write speed.
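A minimal sketch of those two measurements (the device and mount-point paths are placeholders; adjust them to your system, and note the write test creates an 8 GB file):

```shell
# Sequential read: stream 4 GB from the raw device to /dev/null.
# /dev/sda is a placeholder; run as root and pick the right disk.
dd if=/dev/sda of=/dev/null bs=1M count=4096 status=progress

# Sequential write: write a file larger than RAM (the Helios64 has 4 GB)
# so the page cache cannot absorb it all; conv=fdatasync forces a flush
# to disk before dd reports its final speed.
dd if=/dev/zero of=/mnt/xfs/testfile bs=1M count=8192 conv=fdatasync status=progress
rm /mnt/xfs/testfile
```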

There are some more benchmarking options listed in this answer on Ask Ubuntu:

https://askubuntu.com/a/991311

 

Br

Seneca

Posted

Measuring disk performance is hard.

 

You might want to try this command

 

dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test

Posted

You can monitor your ZFS speed by issuing "zpool iostat x", with x being the interval between reports in seconds. You can put some crazy value there, like 0.001.
There is one catch: I think the read/write values are the cumulative speed of all disks (parity included), not the true usable data R/W of the pool.
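As a rough sanity check of that catch (the raw figure below is made up for illustration, not a measurement): with raidz, usable bandwidth is roughly (disks - parity) / disks of what zpool iostat sums up across the drives.

```shell
# zpool iostat reports bandwidth summed over all disks, parity included,
# so scale it down by the number of parity disks per stripe.
raw_mbps=160   # made-up raw figure as reported by zpool iostat
disks=4        # e.g. a 4-disk raidz2...
parity=2       # ...where two disks' worth of each stripe is parity
echo "$(( raw_mbps * (disks - parity) / disks )) MB/s usable (approx.)"
```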


On my Helios64 I'm getting about 140 MB/s true reads and 80 MB/s true writes for a raidz2 on four very old and already dying HDDs:
2x https://www.newegg.com/hitachi-gst-ultrastar-a7k1000-hua721010kla330-1tb/p/N82E16822145164
2x https://www.newegg.com/seagate-barracuda-es-2-st31000340ns-1tb/p/N82E16822148278



The write from dd's point of view:
[screenshot: dd output]
 

The same write from iostat's point of view:
[screenshot: zpool iostat output]



And wow, ZFS learns fast: files get cached in RAM, and I get crazy read speeds after successive reads of the same file:
[screenshot: cached read speeds]



I also tried copying a large folder from an NTFS USB SSD to ZFS, and the write speed reported by zpool iostat was about 140 MB/s. So this is not a definitive answer about write speed, but our measurements are in the same ballpark at least.
I believe the write speed could be optimized, but only at the cost of general-purpose usability. For my use cases the speed and redundancy are enough. I won't go into a speed race only to find later that I've trapped myself in a corner case.

 

Edit: it looks like the caching is actually done by the kernel in this case (I have not configured any ZFS cache).

If someone wants to make an automated read/write test script, you can flush the kernel cache as described here:
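For reference, the usual way to flush the Linux page cache between benchmark runs looks like this (requires root):

```shell
# Write out dirty pages first, then drop clean page, dentry and inode
# caches so the next read really hits the disks.
sync
echo 3 > /proc/sys/vm/drop_caches
```

Note this only clears the kernel's page cache; ZFS's own ARC is managed separately.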

 
