roswitina Posted December 23, 2020 I have just put my Helios64 into operation. I have five HDDs with 10 TB each. Three of them I combined into a raidz1 pool with ZFS (0.8.6); the other two disks I mounted individually, formatting one with XFS, while the last one already carried an NTFS filesystem. On the NTFS disk I keep some very large *.vhdx files which I only use for copy tests, using Midnight Commander on the console. Copying from the NTFS disk to the XFS disk reached a speed of about 130 MB/s. Copying from the XFS disk to the ZFS raidz1 reached about 100-110 MB/s. I cannot judge these values on my own. Can someone please tell me whether these speeds are reasonable? Rosi
Seneca Posted December 26, 2020 What speed did you get when the disks were mounted in a different machine? 7,200 rpm SATA drives usually sustain numbers like these for sequential reads and writes; more speed can be attained by adding more spindles (drives). Although I think your read speed potential should be a bit higher. Try piping a file to /dev/null to measure the real read speed of one disk first, and piping /dev/zero to a file on that disk or another disk (make the file larger than the cache) to get the max write speed; a sketch is below. There are some more benchmarking options listed in this answer on Ask Ubuntu: https://askubuntu.com/a/991311 Br Seneca
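A minimal sketch of those two tests, assuming the disk under test is mounted at /mnt/test (hypothetical path) and writing a file larger than the Helios64's 4 GB of RAM so the page cache cannot absorb it:

# Read test: stream an existing large file (e.g. one of the .vhdx copies; name is hypothetical) to /dev/null
dd if=/mnt/test/somefile.vhdx of=/dev/null bs=1M status=progress

# Write test: zeros to a new 8 GB file, flushed to disk before dd reports a speed
dd if=/dev/zero of=/mnt/test/zeros.bin bs=1M count=8192 conv=fdatasync status=progress
rm /mnt/test/zeros.bin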
lanefu Posted December 26, 2020 Measuring disk performance is hard. You might want to try this command: dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
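(conv=fdatasync makes dd flush the data to disk before reporting a speed, so the page cache does not inflate the number.) For what it's worth, a rough equivalent with fio, assuming fio is installed and run from a directory on the disk under test (the job name and size here are arbitrary):

# Sequential 64k write test, fsynced at the end so the result reflects the disk
fio --name=seqwrite --rw=write --bs=64k --size=1g --end_fsync=1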
tionebrr Posted December 26, 2020 You can monitor your ZFS speed by issuing "zpool iostat x", with x being the time between each report in seconds. You can put some crazy value there, like 0.001. There is one catch... I think the read/write values are the cumulative speed across all disks (parity included), and not the true usable data rate of the pool. On my Helios, I'm getting about 140 MB/s true reads and 80 MB/s true writes for a raidz2 on four very old and already dying HDDs: 2x https://www.newegg.com/hitachi-gst-ultrastar-a7k1000-hua721010kla330-1tb/p/N82E16822145164 2x https://www.newegg.com/seagate-barracuda-es-2-st31000340ns-1tb/p/N82E16822148278
[screenshot: the write from dd's point of view]
[screenshot: the same write from zpool iostat's point of view]
And wow, ZFS is learning fast... files get cached in RAM, and I get crazy reading speeds after successive reads of the same file:
[screenshot: cached re-read speeds]
I also tried copying a large folder from an NTFS USB SSD to ZFS, and the write speed reported by zpool iostat was about 140 MB/s. So yeah, this is not a definitive answer about write speed, but our measurements are in the same ballpark at least. I believe the write speed could be optimized, but only at the cost of general-purpose usability. For my use cases the speed and redundancy are enough; I won't go into a speed race only to find later that I trapped myself in a corner case. Edit: it looks like the caching is actually done by the kernel in this case (I have not configured any ZFS cache). If someone wants to make an automated read/write test script, you can flush the kernel cache between runs; a sketch follows.
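A minimal sketch of such a script, assuming the pool is mounted at /tank (hypothetical path) and run as root; note that ZFS's own ARC is separate from the Linux page cache, so drop_caches may not evict everything:

#!/bin/sh
# Hypothetical automated ZFS read/write test; adjust TESTFILE to your pool's mountpoint.
TESTFILE=/tank/speedtest.bin

# Sequential write, flushed to disk before dd reports a speed
dd if=/dev/zero of=$TESTFILE bs=1M count=4096 conv=fdatasync

# Drop the kernel page cache so the read below has to hit the disks
sync
echo 3 > /proc/sys/vm/drop_caches

# Sequential read
dd if=$TESTFILE of=/dev/null bs=1M

rm $TESTFILE

Watching zpool iostat 1 in a second terminal while this runs shows the per-disk traffic, parity writes included.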
roswitina (Author) Posted January 1, 2021 Thank you for the information. I decided to use ZFS (raidz1) with three disks.