
Slow write speeds on Helios64 (tested SATA and USB UAS DAS)


slymanjojo


Linux Hell64 5.8.17-rockchip64 #20.08.21 SMP PREEMPT Sat Oct 31 08:22:59 CET 2020 aarch64 GNU/Linux

 

I've been testing the performance and comparing it to another NAS I have (WD PR4100), and the Helios64 has not been faring well.

The setups are almost identical: both are connected via gigabit Ethernet to the same switch.

Using Samba, no RAID. Testing with SATA HDs previously formatted as NTFS, and a UAS USB 8-bay DAS.

 

Reading from the Helios64 to a Windows 10 PC I achieve the expected 113MB/s, and 110MB/s from the USB DAS attached to the Helios64.

Write speeds are very poor and, as might be expected, worse on the directly attached USB DAS.

 

WIN10 -> Helios64 eth0 SAMBA
----------------------------
Test 1 = ~45MB/s (write)
Test 2 = ~41MB/s (write)

WIN10 -> Helios64 USB (UAS NTFS)
--------------------------------
Test 3 = ~8-38MB/s (write; fluctuates like a yo-yo between 8 and 38MB/s)

 

To give you a comparison, here are the numbers from my WD PR4100:

 

WIN10 -> WD PR4100 NAS GIG eth0 SAMBA
-------------------------------------
Test 1 = ~109MB/s (write)
Test 2 = ~105MB/s (write)

PR4100 -> WIN10 PC GIG eth0 SAMBA
---------------------------------
Test 3 = ~112-123MB/s (read)
Test 4 = ~113MB/s (read)

 

If I run hdparm -tT on the directly attached SATA and USB drives, the performance is as expected.

I'm trying to determine why the write performance of the Helios64 NAS is so poor compared to the WD PR4100.
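For reference, the numbers below came from runs along these lines:

# sequential read benchmarks on the SATA and USB drives (run as root)
hdparm -tT /dev/sdc
hdparm -tT /dev/sdm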

 

 

/dev/sdc: (Directly attached SATA drive; NTFS)  
 Timing cached reads:   2478 MB in  2.00 seconds = 1238.83 MB/sec
 Timing buffered disk reads: 710 MB in  3.01 seconds = 236.11 MB/sec

 

/dev/sdm:  (USB UAS DAS)
 Timing cached reads:   2240 MB in  2.00 seconds = 1120.03 MB/sec
 Timing buffered disk reads: 550 MB in  3.01 seconds = 182.87 MB/sec

 

 


Thanks for your response!

 

No, not using OMV.
(I wasn't certain whether loading OMV would wipe out my huge collection of movies/TV shows, so I played it safe; I think I read somewhere that OMV will not work with NTFS and will want to reformat.) The 5x SATA drives are mounted using fstab.
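For reference, each drive is mounted with an /etc/fstab entry along these lines (the UUID and mount point here are placeholders, not my actual values):

# example /etc/fstab entry for one NTFS disk (placeholder UUID and mount point)
UUID=0123456789ABCDEF  /srv/disk1  ntfs-3g  defaults,noatime,big_writes  0  0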


Here are the results of the FTP tests. I get 39.57MB/s (a far cry from the expected 113MB/s over gigabit), but consistent with my SMB tests.

-snip-

226 Transfer complete.
ftp: 17242487284 bytes sent in 435.70Seconds 39573.77Kbytes/sec.
ftp>

-end snip-

 

** Do you think the issue is SMP affinity? If so, any suggestions?

 
Looking at cat /proc/interrupts:
          CPU0       CPU1       CPU2       CPU3       CPU4       CPU5
27:    1701696          0          0          0          0          0     GICv3  44 Level     eth0


I can see eth0 is served by CPU0 (IRQ 27). I could not find which interrupt is being used by SATA; I assume it is also served by CPU0?
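I suppose something like this should narrow it down, though I'm not sure what the SATA/USB controllers are called here:

# list interrupt lines for likely SATA/USB controllers (names depend on the driver)
grep -iE 'sata|ahci|xhci|usb' /proc/interrupts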


I tried reading a little on SMP affinity and on CONFIG_HOTPLUG_CPU, but I'm not sure what tweaking is required, if any.

*** Should I consider moving the eth0 interrupt (IRQ 27) over to CPU1?

*** Is this how it is done?

echo "2" > /proc/irq/27/smp_affinity

I have no idea what I am doing; please advise. Will this be optimized in the upcoming kernel 5.9? Any suggestions for tweaking would be very helpful.
 


I didn't read your first message properly and missed a very important piece of information: your disks are formatted in NTFS :-/ There can be a big write performance penalty when using NTFS partitions on Linux, since they are handled in userspace via libfuse.
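You can confirm the FUSE path is being used by checking the mount type; ntfs-3g mounts show up with type fuseblk (a quick check, assuming the disks are already mounted):

# ntfs-3g (FUSE) mounts are listed with filesystem type "fuseblk"
findmnt -t fuseblk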

 

Can you do a local write test with dd to the NTFS partition to see the performance?
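Something along these lines would do (the mount point is a placeholder, adjust it to wherever the NTFS partition is mounted; conv=fdatasync forces the data to be flushed to disk before dd reports the rate):

# write ~2 GB to the NTFS partition and report sustained throughput
dd if=/dev/zero of=/mnt/sdc1/dd-test.bin bs=1M count=2048 conv=fdatasync status=progress

# remove the test file afterwards
rm /mnt/sdc1/dd-test.bin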

 

Honestly, you should migrate from NTFS to another file system.
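If you do switch, the reformat itself is quick; roughly like this (this destroys everything on the partition, and /dev/sdX1 plus the mount point are placeholders):

# WARNING: destroys all data on the partition
mkfs.ext4 -L data1 /dev/sdX1

# mount it (add a matching line to /etc/fstab to make it permanent)
mkdir -p /srv/data1
mount /dev/sdX1 /srv/data1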

 

I did some tests again and I max out gigabit when writing from PC --> Helios64 over SMB or FTP.


I don't know the reason for having NTFS on these disks. Maybe laziness, maybe the wish to be able to simply take a disk out and put it back into your Windows PC for recovery, etc.

But just to let you know: drivers also exist for Windows that make real file systems like ext4 available. So there's no need to be afraid of travelling into unknown territory and formatting them as ext4 or any other awesome fs, like ZFS for example :)


Gentlemen, 

**Issue solved.   :thumbup:  

 

@gprovost: I tested with ext4 and now max out gigabit, both PC --> Helios64 over SMB or FTP and over USB 3.1 UAS.

- I had no idea how bad NTFS was, or how much of a penalty it carries.

** Thanks for testing; you saved me the trouble of trying to tweak Samba and mucking about with things for no reason.

                     

@Werner: Yes, my main concern was portability and flexibility; I have two 8-bay DAS enclosures with 8TB disks, all partitioned with NTFS, and I do use Win 10.

- I might fall into the lazy category, but with 23 disks of at least 8TB each, backing up and restoring that amount of data is a daunting task given the sheer amount of time required.

- I read up on ZFS; I was very tempted, but some folks seem to be complaining about issues in other posts.

- As you mentioned Win 10 and ext4: I did find Ext2Fsd but have not tested it yet. Do you know of any Win 10 solution for rw access to ZFS?

 

Gentlemen,  thank you for your great support and help.    

 

I'm enjoying the learning experience: this Helios64 is turning out to be a great project!

 

Cheers!
