
H3 devices as NAS



Yeah, you'll have to be "careful" with USB SATA bridges on those boards, I guess.

 

I wanted to try something on my pcduino3 nano (A20), but the two USB host ports don't seem to be able to power any of my 2.5" HDD enclosures; I can only get one of my SSDs powered from there.

 

I'm using a bench power supply to avoid any power problems, and tried both the default USB power plug and the GPIO +5V DC-in pin, but no luck.

Using a USB hub with external power works, but that's a lot of extra hardware I don't want to add.

 

I'm a bit surprised, although I haven't used that board much, and only with its SATA connector and an external HDD power supply, so I never really tested the USB ports with bus-powered devices.

 

Anyway, those LinkSprite boards never inspired much confidence in me, and they are a bit pricey, so I think I'll just ditch them and turn to H3 boxes or boards.

I have no problem powering those HDD enclosures with my Beelink X2 or OPi PC and a stock 2A supply.

--

Well, I've ditched my pcduino3 nano, which is simply a complete mess when it comes to USB.

 

Using the Xenial vanilla kernel, I'm trying to use my Orange Pi PC with a USB3 GbE adapter, this one:

Bus 003 Device 002: ID 0b95:1790 ASIX Electronics Corp. AX88179 Gigabit Ethernet

and its driver should be built as a module according to /boot/config-3.4.112-sun8i:

CONFIG_USB_USBNET=m
CONFIG_USB_NET_AX8817X=m
 
but that module is nowhere to be found in /lib/modules. Am I missing something?
thx
 
--
 
Hmm, that 3.4.112 kernel is missing CONFIG_USB_NET_AX88179_178A.
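For anyone hitting the same wall, a quick sanity check would look roughly like this (the config path is the one from the post above; adjust for your kernel):

```shell
# Does the kernel config claim the AX88179 driver? (legacy 3.4.112 config path)
grep AX8817 /boot/config-3.4.112-sun8i 2>/dev/null || echo "no such config entry"

# Was the module actually built and shipped for the running kernel?
find "/lib/modules/$(uname -r)" -name 'ax8817*' 2>/dev/null || echo "module tree missing"
```

If the grep only shows CONFIG_USB_NET_AX8817X (the USB2 AX88172/AX88772 family) and no CONFIG_USB_NET_AX88179_178A, the kernel simply cannot drive this adapter no matter what modprobe is asked to do.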
 
--
 
Okay, the official "android" sources seem to work out of the box:
 
That USB2-GbE link gives me a steady bidirectional 340 Mbps (iperf), which is not bad, I guess.
 
Tests with the performance and interactive governors gave the same result: a steady 340 Mbps.
 
I didn't expect to get the full 480 Mbps, but I'm not sure how high you can aim. tkaiser got 270-300 Mbps on his OPi Lite from the OTG port, so that's probably the range of speeds you can expect.
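For reference, both directions can be measured from the board side with iperf3's reverse mode; the server address below is a placeholder:

```shell
# On the desktop (server side), run first:  iperf3 -s
if command -v iperf3 >/dev/null; then
  iperf3 -c 192.168.1.10 -t 30 || true       # board transmits (Tx test); placeholder IP
  iperf3 -c 192.168.1.10 -t 30 -R || true    # board receives (Rx) without swapping roles
fi
```

Testing both directions separately matters here, since USB Ethernet adapters often behave very differently for Tx and Rx.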
 
Here's where I bought that adapter (sunsky-online); a bit expensive:
Edited by mdel
--

Quick follow-up.

 

For some reason my AX88179_178A shows dramatically different performance depending on how you run the iperf / NFS tests.

When the board acts as the client, the speed drops to 95 Mbps on the OPi PC and 104 Mbps on the pcduino nano. I tested that adapter on another device to make sure it was not related to the driver I downloaded; the pcduino has the Armbian vanilla 4.6 kernel with the built-in driver.

 

In case it's related to my local network: in both cases the board is connected to my desktop through a managed GbE switch.

But I can do 980 Mbps in both directions from that PC to another local server, through that same switch.

 

I now use the same JMicron enclosure as tkaiser used in his UAS test, although I'm not doing UAS tests yet, and I've populated that enclosure with a 250GB 750 EVO SSD.

 

So, local performance on that USB SSD, using an ext4 partition, is pretty close to what tkaiser had in his test: speeds around 35MB/s (although dd claims it's 45MB/s), and the iozone results are pretty close too (a little higher, considering I'm using an SSD).
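The iozone invocation behind numbers like these looks roughly as follows (the mount point is a placeholder). Unlike plain dd, the -e flag includes flush time and -I uses direct IO, so the page cache can't inflate the results:

```shell
# Run from a directory on the USB disk's filesystem
if command -v iozone >/dev/null && cd /mnt/usbdisk; then
  iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
else
  echo "need iozone installed and the USB disk mounted at /mnt/usbdisk"
fi
```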

 

I then set up NFSv4 and tested transferring numerous large files (>2GB). I'm using the GbE USB3 adapter on the OTG port in order to avoid sharing USB hosts, as I'm not sure how the ports are wired on the OPi PC (3 USB + OTG).
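For context, a minimal NFSv4 setup of this kind can be as small as one export line; the export path, subnet, and host name below are assumptions, not details from the post:

```shell
# Server side (the OPi): one export line plus a reload. The 'async' option
# trades crash safety for throughput, which is fine for benchmark shares.
grep -q '^/mnt/ssd' /etc/exports 2>/dev/null || \
  echo "add to /etc/exports: /mnt/ssd 192.168.1.0/24(rw,async,no_subtree_check)"
command -v exportfs >/dev/null && exportfs -ra || echo "nfs-kernel-server not installed"

# Client side:
#   mount -t nfs4 opi.local:/mnt/ssd /mnt/nfs
```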

 

Writing files to the SSD on the OPi worked fine; I'm getting the "full" 35MB/s write speed, and with some caching in the background the client sees a speed around 42MB/s.

Unfortunately, due to the low Tx speed identified above with iperf, reading from the OPi is always capped at around 95 Mbps.

 

Using a ramfs partition I can max out Rx speed at 340 Mbps, but Tx is still not working properly.

Not sure what's going on with that Gbe adapter.

 

One quick note though: I started doing network read/write tests using the -F "file" option of iperf3, but the disk access is extremely slow, showing speeds around 17MB/s. As it only happens with iperf, I assume it's due to the way it writes data, so don't use that.

--

So, local performance on that USB SSD, using an ext4 partition, is pretty close to what tkaiser had in his test: speeds around 35MB/s (although dd claims it's 45MB/s), and the iozone results are pretty close too (a little higher, considering I'm using an SSD)

 

That's the reason why dd should be avoided when doing any IO measurements. And you shouldn't be too surprised that USB Ethernet adapters show unbalanced performance depending on the direction. If you get 300 Mbits/sec in either direction you're a lucky guy.

 

How do you read lsusb output to know whether your devices are plugged into different physical USB hosts or into a single host through an integrated USB hub?

 

Simply check the bus numbers. Or check the linux-sunxi wiki, where device details for nearly every H3 device are listed (internal USB hub or not).
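A sketch of that check: lsusb -t prints the topology, where each "Bus NN." root is a separate host controller, and anything nested under a "Hub" entry shares that hub's upstream bandwidth:

```shell
if command -v lsusb >/dev/null; then
  # Tree view: devices on different "Bus" roots sit on different physical hosts
  lsusb -t
else
  echo "lsusb (usbutils) not installed"
fi
```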

--

@tkaiser

 

Did you try IPv6 in your GbE adapter bandwidth tests?

 

Out of curiosity (and frustration) I tested iperf3 over IPv6, and the results were quite catastrophic; I can barely get above 100 Mbps in either direction.

I'm wondering if I screwed something up along the way; I haven't tested other devices yet.

 

I'm still wondering why we get such different performance using the same GbE adapter on various hosts / systems (laptops, Windows 7/10). Is it related to drivers, or to the USB2/3 performance of the host?

 

And basically, is the "gigabit" label only for the sticker, or should some of these adapters actually provide GbE speeds?

thx

--

Did you try IPv6 in your GbE adapter bandwidth tests?

 

Never. I have two other USB Ethernet dongles: one with a 'Gigabit LAN' claim from SMSC, and Apple's pricey Fast Ethernet dongle. Guess what? Apple's Fast Ethernet adapter is faster in iperf's --dualtest mode :)

 

IMO it's better to have zero expectations when buying these dongles... it prevents frustration. I bought the two RTL8153-based dongles since @Rodolfo recommended them, and at least they do not suck too much. But I wouldn't have been surprised if they had been a fail, since as we all know there are marketplaces where fakes (be it whole products or 'just' the main IC used) can be considered the norm.

 

As an example: you can get Fast Ethernet adapters that claim to contain an ASIX AX88772A, but in reality it's just a cheap clone: https://projectgus.com/2013/03/anatomy-of-a-cheap-usb-ethernet-adapter/ -- why shouldn't what already happened with its little/cheaper sibling also happen with your more expensive AX88179? ;)

--

IMO it's better to have zero expectations when buying these dongles... prevents frustration

 

Most USB2-based add-ons (SATA adapters, LAN sticks, flash drives) have vastly inferior performance compared to the newer USB3 stuff, as the bandwidth limitation of USB2 does not really merit faster components. USB3-based gear, on the other hand, uses faster components, so the limitation will be the USB2 speed of the host port.

 

A USB3 gigabit dongle + USB3 SATA adapter (or 2x USB3 nano flash drives for low-power use) are combinations that work very well with the OPi Lite.

--

Well, I've just hit a blocker for my usage: no XFS or BTRFS support in the kernel included in the legacy images for the OPi+ 2E. I could use ext4, but it's not recommended because of how Ceph relies on xattrs. I guess I want to upgrade to mainline; any problems with that?

 

On a separate note, some USB 3.0 to eSATA adapters I ordered got delivered. Crazy good speeds, like 450MB/s sequential access, with JMS567s at their hearts. Definitely agreed on USB 3 stuff being a good bet.

--

I guess I want to upgrade to mainline, any problems with that?

 

On a separate note, some USB 3.0 to eSATA adapters I ordered got delivered. Crazy good speeds, like 450MB/s sequential access, with JMS567s at their hearts. Definitely agreed on USB 3 stuff being a good bet.

 

Well, 450MB/s is a clear sign of testing DRAM and not disks, since H3 features only USB 2.0 ports, and with the legacy kernel you get ~35MB/s max (maybe 37 under best conditions). If I were you I would use exactly the three iozone calls from here to test the difference between legacy and mainline kernel with your disk setup: http://linux-sunxi.org/USB/UAS

 

Regarding the mainline kernel: you would have to build an image yourself, or read through post #8 of this thread to recycle an older preliminary OS image for the BPi M2+ (it will work out of the box on the OPi Plus 2E, since the BPi M2+ is just a lousy clone, but one of the USB host ports is not enabled since it's not available on the BPi M2+). If you apply a heatsink to the H3 on the OPi Plus 2E you're pretty safe regarding overheating (there's still no throttling implemented in the mainline kernel for H3, but the thermal design of the Orange Pis is pretty good).

 

BTW: I personally would never ever use Ceph on any node lacking ECC RAM. But that's just a personal opinion; I really hate data corruption, and have to deal with such cases maybe a little too often :)

--

Well, 450MB/s is a clear sign of testing DRAM and not disks since H3 features only USB 2.0 ports [...]

Yeah, that speed comes from attaching one (and an SSD) to my desktop. It's more a proof that there's spare capacity in these things, and that if I ever wanted an SSD attached to a USB 3-capable ARM board they'd be a good choice :)

 

Thanks, I'll give that image a test on one of my 2Es. It's been encoding videos for the past few days (24x slower than my desktop, but 3x the efficiency) and it's only at 65°C with the help of a small heatsink and fan, so no worries there.

 

As for ECC: agreed it's a risk, but my cluster isn't in production; it's just replacing an HP MicroServer with something cheaper to run and grow. Here's hoping scrubs pick up any small problems, nodes get kicked out for big problems, and off-site incremental backups leave me a good copy somewhere.

--

@CampGareth,

 

Could you please share some links to fast USB3 <-> SATA/eSATA adapters?

https://www.amazon.co.uk/gp/product/B00PAFDFCU/

 

£3.92 each; shipping's super slow though. I don't doubt you can find equivalent devices on eBay etc.

 

The SATA adapters I'm finding try to power the drive over USB, which isn't ideal for me, but if you find some with the same JMS567 chip and a USB 3.0 interface you should be good.

--

Still doing some more tests.

 

I've switched to an S905 TV box (Nexbox A95X) in order to get an idea of the possible limitations of the H3 SoC.

It's running a custom desktop Armbian image (legacy kernel) from balbes150, from which I removed all the desktop components. I'll try the official Armbian Odroid C2 image later (once I figure out the trick to get rid of that boot.ini).

 

So that box has only two USB ports, on two root hosts; I'm using one for the GbE adapter and the other for a USB3 HDD (not an SSD).

For the GbE, this time I'm getting 215 Mbps upstream and 245 Mbps downstream, which is not too bad, I guess.

 

So I ran tests with iperf3 up/down, along with 4GB dd reads and writes to the HDD.

The good thing is that in any combination the results are very consistent; I never got a test that deviated by more than 5 Mbps on the network or 2MB/s on the HDD.
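For reference, the dd part of such a test might look like the sketch below (file size reduced here; the post used 4GB files). The conv=fsync flag keeps the write timing honest by including the final flush:

```shell
TESTFILE=${TESTFILE:-/tmp/ddtest}   # point this at the HDD mount for a real test

# Write test: fsync included in the timing so the page cache can't fake it
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>&1 | tail -1

# Read test: drop caches first (needs root) or the file is served from RAM:
#   echo 3 > /proc/sys/vm/drop_caches
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -1

rm -f "$TESTFILE"
```

Even so, dd only measures large sequential transfers; iozone (as used elsewhere in this thread) gives a much fuller picture.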

 

Network speeds are always as mentioned above, and for the HDD I get a steady 38-40MB/s.

That pretty much matches all the H3 tests in this thread.

 

The ondemand governor almost never went up to the full 2GHz and happily stayed at 1.5-1.75GHz, with CPU load barely going over 20% and some spikes to 60% on one core, most probably caused by the monitoring scripts (htop, glances...).

 

So, as seen with the H3, the S905 CPU is clearly not a bottleneck, and the same kind of NAS setup should be achievable.

 

USB2 still feels really weak compared to present-day bandwidths (or even humble native SATA2), but as long as you don't expect GbE speeds to your storage device it should suffice.

 

 

I'm quite interested to see how the Allwinner R40 will perform; it could shuffle the cards again.

My current setup still uses an old A20 running one SATA HDD, and I've moved the VPN work to a second device (an old S805 TV box) with real GbE performance.

 

Doing laptop repairs, I also got the idea of using old Eee PC motherboards, which can be found at ridiculous prices (~35€); they are basically the same thing as an ARM dev board (with SATA). The problem is identifying the ones that do have GbE. I haven't done many tests on those weak Atom CPUs, so I'm expecting performance issues when it comes to high-speed VPN, but for regular storage it should work fine. We'll see.

--

I just purchased a JMS567 USB-SATA adapter and am using Armbian 5.23 Jessie Desktop.

However "dmesg | grep uas" doesn't show anything.

It seems that USB UASP is not enabled by default.

Can anybody tell me how to enable UASP?

 

root@RetrOrangePi:~# modprobe uas

modprobe: FATAL: Module uas not found.

 

Strange... I thought Armbian 5.23 already had the kernel compiled with CONFIG_USB_UAS=y?
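For what it's worth, on a kernel that does ship UAS, a check might look like this sketch (config paths and tools assumed to be standard Linux):

```shell
# Is UAS enabled (=y builtin or =m module) in the running kernel's config?
{ zgrep CONFIG_USB_UAS /proc/config.gz 2>/dev/null || \
  grep CONFIG_USB_UAS "/boot/config-$(uname -r)" 2>/dev/null; } || \
  echo "no kernel config found"

# Which driver claimed the bridge: 'uas' or plain 'usb-storage'?
if command -v lsusb >/dev/null; then
  lsusb -t | grep -iE 'uas|storage' || echo "no USB storage attached"
else
  echo "usbutils not installed"
fi
```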

--

On 3/1/2017 at 9:23 AM, tkaiser said:

 

There's nothing to 'enable' in the smelly Allwinner legacy kernel. Drivers had to be written, so this will only work with the mainline kernel: http://linux-sunxi.org/Sunxi_devices_as_NAS#New_opportunities_with_mainline_kernel

These last days I've been testing (or at least trying to test) whether an Orange Pi Plus 2E can replace my NAS based on a Banana Pi M1*.

I've connected the Orange Pi to a 5V/5A PSU and want to attach two HDDs to it. Without the HDDs the Pi boots perfectly fine; trying to boot with the disks connected, it won't boot at all. Do I need to add a second power source for the disks? If yes, how? Is a Y-cable best practice? I could cut the 2nd USB plug (the red one) and attach it to the PSU directly.

 

After days of searching and reading I also found this 'UASP magic'. My current hardware is not compatible with UASP (I would need 2x enclosures and 2x Y-cables). Before I invest in new stuff: does UASP only work with the mainline kernel and not with the legacy kernel? I've read that the Ethernet connection with mainline is buggy and that it is advised to use legacy for server applications. Would you advise buying UASP-compatible hardware and switching to the mainline kernel?

 

 

*: On the Banana Pi I have two HDDs connected, one via SATA and one via USB. Because of the current limit on the micro USB plug, I've attached the power input of the SATA disk to the same power supply as the Banana Pi.

--

3 minutes ago, Alexio said:

Before I invest in new stuff, UASP only works with the mainline kernel and not with the legacy kernel?

Yes, on H3-based boards UAS is available only with the mainline kernel.

 

3 minutes ago, Alexio said:

I've read that the Ethernet connection with mainline is buggy and that it is advised to use legacy for server applications.

Ethernet works fine, but we don't provide supported Armbian images with the mainline kernel for H3-based boards yet.

--

24 minutes ago, zador.blood.stained said:

Yes, on H3 based boards UAS is available only on the mainline.

 

Ethernet works fine, but we don't provide supported Armbian images for H3 based boards with the mainline kernel yet.

Thank you for the prompt reply; then I will not invest in UASP hardware for now.

I've also ordered the Y-cables. Hopefully that solves the 'OPi not booting with HDDs' issue. To be continued...

--

@tkaiser Assume I have a pair of 2TB drives. In a typical software mdadm RAID0 config, loss of one drive means total loss of the array. If using btrfs with metadata in RAID1 and data in RAID0, is loss of one drive total loss of the array? Or, after adding a new (third) 2TB drive, can the metadata remaining on the healthy drive rebuild the array onto the new drive before failing out the failed drive?
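For reference, the layout being asked about would be created roughly like this (device names are placeholders):

```shell
if [ -b /dev/sdX ] && [ -b /dev/sdY ]; then
  # Mirrored metadata (raid1) but striped, non-redundant data (raid0).
  # Data chunks are spread over both devices, so losing either disk
  # still loses the data; only the metadata is duplicated.
  mkfs.btrfs -m raid1 -d raid0 /dev/sdX /dev/sdY
else
  echo "replace /dev/sdX and /dev/sdY with real block devices first"
fi
```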

--

12 hours ago, rajprakash said:

If using btrfs with metadata in RAID1 and data in RAID0, is loss of one drive total loss of the array?

Of course. All data gone.

12 hours ago, rajprakash said:

Or after adding a new (third) 2TB drive, can the metadata remaining on the healthy drive rebuild the array on the new drive before failing out the failed drive? 

Nope, it's just metadata (small checksums used to verify data integrity), but no parity data like what's used with RAID5/6, and of course you can't reconstruct data from this type of metadata.

 

On H3/H5 boards that are limited by USB2, an interesting alternative could be an mdraid RAID10 with far layout using 2 disks (you don't need 4 for RAID10, and these days you should always prefer RAID10 over RAID1 on Linux). This way you get full redundancy and protection against a drive failure while at least doubling read performance. This is an OPi Plus 2E with 2 SSDs in UAS-capable JMS567 disk enclosures, using a RAID10 with a rather small chunk size (the default is 512K) and a btrfs on top:

mdadm --create /dev/md127 --verbose --level=10 --raid-devices=2 --layout=f2 --chunk=4 /dev/sda /dev/sdc

RAID-10 made of Intel 540S and Samsung PM851 with 4k chunk size (iozone results in kB/s):
                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4     7044     7549     8654     8824     7058     7011
          102400      16    14815    14508    25361    25344    21222    14731
          102400     128    26835    26558    46249    47619    44310    26084
          102400     512    26882    29306    51175    51847    51979    28552
          102400    1024    29347    29960    51424    52177    52287    29056
          102400   16384    30129    30715    74614    77384    78568    29954
         4096000       4    36556    35967    81080    82749
         4096000    1024    36335    35705    81314    81422
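A sketch of the remaining steps on top of such an array (array name taken from the mdadm command above): since md provides the RAID, the btrfs on top stays a plain single-device filesystem.

```shell
if [ -b /dev/md127 ]; then
  mdadm --detail /dev/md127 | grep -iE 'layout|chunk'   # expect far=2 layout, 4K chunks
  mkfs.btrfs /dev/md127   # md handles redundancy; btrfs profiles stay 'single'
  mount /dev/md127 /mnt
else
  echo "no /dev/md127 array present"
fi
```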

 

--

8 hours ago, tkaiser said:

...Nope, it's just metadata (small checksums used to verify data integrity) but no parity data like it's used with RAID5/6 and of course you can't reconstruct data from this type of metadata....

 

On H3/H5 boards that are limited by USB2 an interesting alternative could be a mdraid RAID10 with far layout using 2 disks [...]

 

 

Understood.  I mistook the metadata for parity.  Thanks for clarifying.

 

The RAID10 f2 layout is an interesting concept. In my use case read operations far outnumber write operations, so it may be the fault-tolerant yet speedy option I didn't know existed :)

--

1 hour ago, rajprakash said:

The RAID10f2 is an interesting concept.  In my use case, read operations far outnumbers write operations, so it may be the fault tolerant yet speedy option I didn't know existed

 

The nice thing about using this RAID variant in NAS situations on GbE-equipped H3/H5 boards is that the larger the available DRAM, the less you notice that IO write speeds are limited to below 40MB/s, since all NAS writes end up in filesystem buffers first and are flushed to disk later. So with an OPi Plus 2E or OPi Prime you can transfer up to 1.5GB of data at 70+ MB/s, and only then does it slow down to ~35MB/s. On boards with less memory this can happen after just a few hundred MB written.
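The buffering described here can be watched directly on any Linux board; a generic sketch:

```shell
# Bytes of not-yet-flushed writes; watch these climb during a large NAS copy
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Background flushing starts at dirty_background_ratio percent of RAM,
# and writers get throttled at dirty_ratio percent
sysctl vm.dirty_background_ratio vm.dirty_ratio 2>/dev/null || \
  cat /proc/sys/vm/dirty_background_ratio /proc/sys/vm/dirty_ratio
```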

--

On 10/23/2017 at 4:58 AM, tkaiser said:

 


mdadm --create /dev/md127 --verbose --level=10 --raid-devices=2 --layout=f2 --chunk=4 /dev/sda /dev/sdc

 

In your test, did you just build a plain btrfs like this: mkfs.btrfs -m single -d single /dev/md127 ?

--

I thought I'd share some links to JMS578-based enclosures I've found:

 

ACASIS FA-08US - cheapest on eBay, but also on Ali: https://www.aliexpress.com/item/Acasis-fa08us-2-5-inch-usb3-0-aluminum-external-hard-drive-disk-SATA-SSD-mechanical-solid/32818921168.html

Metal enclosure with a plastic tray; it has 4 mounting holes to secure the drive to the tray. Firmware version: v0.5.0.8.

 

CHIPAL clear enclosure - firmware version: v133.2.0.2. This enclosure gave me errors after running fstrim with an ADATA SU650, but not with another SSD I have. It works fine with discard. The SU650 + fstrim combination works fine in the ACASIS enclosure.

 

JZYuan adapter - according to the pictures on AliExpress it has a JMS578, but I've not personally tested it.
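A hedged sketch for checking whether a given bridge passes TRIM through (device and mount point are placeholders):

```shell
if command -v lsblk >/dev/null && [ -b /dev/sda ]; then
  # Non-zero DISC-GRAN / DISC-MAX columns mean the bridge advertises discard
  lsblk --discard /dev/sda
  # One-shot trim of a mounted filesystem; an error here matches the
  # fstrim symptom described above
  fstrim -v /mnt 2>&1 || true
else
  echo "no /dev/sda present (placeholder device)"
fi
```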

--

This topic is now closed to further replies.