
Best budget device as torrent box?



I know the topic started as a low-power torrent box, but I am talking about this:

 

If possible, I'd like to do it all on one device (torrent downloader / Amazon upload / media player). This works reasonably on the Raspberry Pi, but you are limited in your download speed, and playing and downloading at the same time doesn't work well.

 

Regarding the IRQ / I/O scheduler, I am talking about the suggestion from http://linux-sunxi.org/SATA, specifically this part:

 

Performance

This section is work in progress

It has been tested to work up to 60MB/s. How, and with what SoC / device combination?

SATA throughput is unbalanced for unknown reasons: with appropriate cpufreq settings it's possible to get sequential read speeds of 200+ MB/s while write speeds remain at approx. 45 MB/s. This might be caused by buggy silicon or driver problems.

Unlike other platforms, sequential SATA transfer rates on A10/A20 scale somewhat linearly with both cpufreq settings and DRAM clock. If you use the wrong cpufreq settings it's impossible to achieve maximum SATA performance (e.g. using the ondemand governor without the io_is_busy setting).

On the dual-core A20 setting both CONFIG_SCHED_MC=y and CONFIG_SCHED_SMT=y at kernel compile time seems to increase SATA throughput (sequential reads +10 MB/s).

Also worth a look are Linux's I/O schedulers. If your SATA disk is available as /dev/sda you can query /sys/block/sda/queue/scheduler to get the list of available I/O schedulers (the active one printed in brackets) and change the scheduler either globally by supplying elevator=deadline in the bootargs environment or on a per-device basis using echo deadline >/sys/block/sdN/queue/scheduler (deadline seems to be the most performant scheduler on A10/A20).

Since IRQ balancing isn't working on sunxi/ARM, one way to get better SATA throughput on A20 devices is to assign all AHCI/SATA IRQs to the second CPU core using echo 2 >/proc/irq/$(awk -F":" '/ahci/ {print $1}' </proc/interrupts)/smp_affinity
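
Putting those suggestions together, this is roughly what the tuning would look like (a sketch only; assuming a legacy sunxi kernel, the ondemand governor and the disk showing up as /dev/sda - the io_is_busy path varies between kernel versions):

# use the deadline I/O scheduler for the SATA disk
echo deadline >/sys/block/sda/queue/scheduler
# let the ondemand governor take I/O wait into account
echo 1 >/sys/devices/system/cpu/cpufreq/ondemand/io_is_busy
# pin the AHCI IRQ to the second CPU core (bitmask: 2 = CPU1)
echo 2 >/proc/irq/$(awk -F":" '/ahci/ {print $1}' </proc/interrupts)/smp_affinity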

 

 

The I/O scheduler I changed for a mechanical HDD like you said in /etc/init.d/armbianhwinfo, but it is set to cfq and not deadline, and the IRQs for the HDD are not sent to the second core; only the IRQ from the NIC is.
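
For reference, this is how to check what is actually active (assuming the disk is sda):

# the active scheduler is printed in brackets, e.g. "noop [cfq] deadline"
cat /sys/block/sda/queue/scheduler
# per-core interrupt counts show which CPU actually services the SATA IRQ
grep ahci /proc/interrupts
# 1 = first core, 2 = second core
cat /proc/irq/$(awk -F":" '/ahci/ {print $1}' </proc/interrupts)/smp_affinity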

 

@Baos - my use case is a bit different from yours. In my case the Cubietruck downloads content and serves it via NFS to 2-3 clients, and so far I have no problem with the read speeds (I don't really know any values for them). The "problem" I have is with write speeds, but write speeds coming from programs like deluged/transmission.

 

 


I am having a real hard time comprehending why the USB stick is such a bottleneck. 

 

We have now added a simple method to check this stuff in the next release. You can then simply let armbianmonitor check a device by doing 'armbianmonitor -c /mountpoint'. For an older 512 MB TF card lying around it looks like this:

 

The results from testing /dev/sda1 (vfat):

  Data OK: 456.10 MB (934096 sectors)

Data LOST: 0.00 Byte (0 sectors)

Average writing speed: 3.11 MB/s

Average reading speed: 10.03 MB/s

                                            random    random

reclen    write  rewrite    read    reread    read     write

     4     1735     1744     4560     4560     4553       58                                                          

   512     4613     4814    10247    10194    10248     2570                                                          

 16384     4718     4960    10411    10340    10410     4847                                                          

 

Health summary: OK

 

Performance summary:

Sequential reading speed: 10.03 MB/s 

 4K random reading speed:  4553 KB/s 

Sequential writing speed:  3.11 MB/s (too low)

 4K random writing speed:    58 KB/s (way too low)

 

The device you tested seems to perform too slow to be used with Armbian.

This applies especially to desktop images where slow storage is responsible

for sluggish behaviour. If you want to have fun with your device do NOT use

this media to put the OS image or the user homedirs on.

 

To interpret the results above correctly or search for better storage

alternatives please refer to http://oss.digirati.com.br/f3/ and also

http://www.jeffgeerling.com/blogs/jeff-geerling/raspberry-pi-microsd-card

and http://thewirecutter.com/reviews/best-microsd-card/

 

Since armbianmonitor is just a simple script, anyone can upgrade right now:

curl "https://raw.githubusercontent.com/igorpecovnik/lib/master/scripts/armbianmonitor/armbianmonitor" >/usr/local/bin/armbianmonitor
apt-get install f3

I wonder if it makes a difference that mine is formatted ext4.

 

Maybe it makes more of a difference that you three guys are constantly measuring completely different things, where the respective bottlenecks are always different, so that it's not possible to put any of the numbers you're throwing around into any relationship.

 

You tested FTP, most probably with single large files. So you tested both network throughput (rather limited in one direction) and sequential storage speeds (also limited both by the interface and the disk in question). Where the 30 MB/s in one direction originates from might have different reasons: the A20's SATA write bandwidth is limited to 40-45 MB/s, and testing with 5400 rpm HDDs without knowing how full they are (when they're empty, sequential transfer speeds are typically twice as high) or in which state they are (heavy fragmentation) is pretty useless.

 

The 10 MB/s Vlad's talking about when using a torrent downloader is most likely not a storage limitation but one of the application in question or of the Internet download speed (it's pretty useless to tweak I/O schedulers if there simply isn't enough data available to be written to the disk).

 

When you start to test all this stuff individually, you get an idea where bottlenecks are present (quite a few of them on any sunxi device), and then you also get an idea of how to improve performance if necessary. Everything is outlined in detail already: http://linux-sunxi.org/Sunxi_devices_as_NAS#Influence_of_the_chosen_OS_image_on_NAS_performance
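
For example, a minimal sketch of testing network and storage individually (assuming iperf and iozone are installed; 192.168.1.100 is just a placeholder for another machine on the LAN running 'iperf -s'):

# network only: raw TCP throughput, no disks involved
iperf -c 192.168.1.100

# storage only: sequential and random speeds on the mounted disk, no network involved
cd /mnt/disk
iozone -e -I -a -s 100M -r 4k -r 1024k -i 0 -i 1 -i 2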

 

And as the result of a few hours of work, storage testing is now made easy: 'armbianmonitor -c' exists, see link above.


Maybe it makes more of a difference that you three guys are constantly measuring completely different things, where the respective bottlenecks are always different, so that it's not possible to put any of the numbers you're throwing around into any relationship.

Not particularly correct: we are all measuring some kind of performance for our use case. I couldn't give a fuck if dd or bonnie is able to write at 100+ MB/s in a synthetic test if the applications I use can't do that.

   

The 10 MB/s I reported earlier could be improved; I got almost 20 MB/s (using the deluge ltconfig plugin), but with both cores at 100% and with 25-30% RAM utilization during the download process (the RAM utilization is almost constant because P2P uses a lot of RAM). When I do these tests I always use well-seeded torrents with the majority of the seeders from the same country. Here is a speedtest for my home connection, which I don't think is a bottleneck:

vlad@storage:~$ speedtest
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from RCS & RDS (79.115.187.92)...
Selecting best server based on latency...
Hosted by NextGen Communications (Bucharest) [7.66 km]: 7.018 ms
Testing download speed........................................
Download: 288.51 Mbit/s
Testing upload speed..................................................
Upload: 97.15 Mbit/s

 

@Baos it will just test read and write speeds; I don't think you need to worry about losing data.


I couldn't give a fuck if dd or bonnie is able to write at 100+ MB/s in a synthetic test if the applications I use can't do that. [...] with both cores at 100%

 

Great, this is the first time we're speaking about the 'active benchmarking' approach. You talked all the time about storage performance and now you tell us the real bottleneck: CPU utilisation. Your dual-core A20 is already too busy with this torrent stuff to be able to write faster to disk. So in reality it's not about storage performance at all, but about your application being CPU-bound.

 

Using appropriate tools (dd and bonnie aren't, as I already tried to explain in this thread) you can check network and storage performance individually and then simply choose a more appropriate device. In your case you might choose an SBC with more/faster CPU cores, still with GbE since Fast Ethernet would already be a bottleneck, and it seems USB storage speed might be sufficient (to be confirmed by appropriate active benchmarking and by identifying how your application writes to disk).

 

In other words: by choosing a device that shows less storage performance but more processing power (e.g. an ODROID C1+ or an Orange Pi Plus) you might be able to improve your application's performance by a factor of 2.


BTW: I chose the Orange Pi Plus as an example for a reason, since this is a device that makes it obvious why it's necessary to test stuff individually.

 

When you connect a SATA disk to the Orange Pi Plus, which seems the most reasonable choice, you probably won't see any performance improvement compared to the Cubietruck now. If you use the very same SATA disk, put it in an external enclosure and connect it through USB, then performance will improve. Why? Are SATA and USB different on the Orange Pi Plus? No, it's just that the H3 SoC features no SATA, and the SATA port on the OPi+ has been implemented using the slowest USB-to-SATA bridge in the world (GL830), which isn't able to exceed 15 MB/s write speeds and will be outperformed by any external USB disk not relying on the GL830.
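
(If in doubt which bridge chip a board or enclosure actually uses, lsusb gives a hint; the GL830 identifies itself with "Genesys Logic, Inc." as the vendor string:)

# list USB devices; a USB-to-SATA bridge shows up here even when
# the connector on the board looks like native SATA
lsusb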

 

So refusing to 'benchmark correctly' will just lead to another frustrating situation and a device you might throw into the bin. By measuring all the stuff individually you're able to identify the real bottlenecks and improve the situation.

 

Will I lose any data using that?

 

In case you asked me: sure, you can always lose data as long as you're not doing backups :P

 

Really, running any sort of real benchmark might put significantly more load on a device, which might lead to stability problems caused by an insufficient PSU, for example. The 'armbianmonitor -c' check was designed to verify the speed/reliability of SD cards and thumb drives (and no, the tests do not destroy anything, they just try to fill the storage in question with test patterns that are verified later), but used with a disk and an insufficient power supply they might lead to data/filesystem corruption.
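
(Judging from the 'apt-get install f3' above, the check relies on f3's write/verify approach; the same thing can be done manually against any mountpoint. A sketch, assuming the device is mounted at /mnt/test:)

# fill the free space with test patterns, then read them back and verify
f3write /mnt/test
f3read /mnt/test
# remove the test files afterwards
rm /mnt/test/*.h2w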

 

But that's what benchmarks are for (when used in an active approach): identifying bottlenecks, be they performance or reliability relevant. In case anyone's interested: we used the Linpack high performance benchmark to identify reliability issues caused by undervoltage settings with Pine64 in the past: https://github.com/longsleep/build-pine64-image/pull/3#issuecomment-196399188


I tried the test; 7 hours later it was still writing files. I closed the test and turned Transmission back on.

 

Congratulations for not being able to understand what a test that tries to check both performance and health/reliability is about: it tests the whole free space, which might take about 15 hours with an empty 2 TB disk on a Cubietruck, since the A20 won't exceed ~40 MB/s.

 

If you want to speed things up, either test only performance with iozone, or allocate free space beforehand so the test finishes more quickly (check free space with df -h, then use fallocate to produce a single large file).
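
A sketch of the second approach (paths and sizes are just examples for a mostly empty 3 TB disk):

# see how much free space the test would otherwise have to fill
df -h /mnt/disk
# pre-allocate a single large file so only ~500 GB remain to be tested
fallocate -l 2500G /mnt/disk/filler.bin
armbianmonitor -c /mnt/disk
# free the space again afterwards
rm /mnt/disk/filler.bin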


I will not be running a 15-hour test. And it's now a 3 TB drive. I would be willing to do 0.5 TB at most; I don't think one needs to fill the hard drive to run a decent test. Also, the only stat I actually care about is how fast I can transfer files off it via an FTP client or anything else: SMB, NFS, etc. (75 mb/s max on Armbian, 20-30 mb/s using NetBSD). Last year I lived out in the country and my max speed most of the time was 0.3 kps. I'm in the city now; when I get 10 mb/s download it's great.


Wow - you probably used up your budgets already discussing wild testing scenarios.

I would never ever buy the "best" solution; I would just opt for something workable and do some very basic tests (testing ONE thing at a time, as tkaiser constantly reminds you ;) ).

 

What do I get from an Orange Pi One torrent box running stock Armbian_5.05?

 

- 4 cores sanely volted and clocked (running without heatsink) -> enough horsepower for processing

- Fast Ethernet (100 Mbps nominal, tested at 95 Mbps) -> 10 MB/s data exchange

- SATA HDD on USB2 via a cheap adapter -> 30/30 MB/s data write/read (powered directly from the port)

  (USB2 stick on USB2 via OTG adapter -> 5/10 MB/s data write/read)

  (USB3 stick on USB2 via OTG adapter -> 12/25 MB/s data write/read)

 

Assuming you are hooking this up to a home network with a limited uplink to the Internet, this setup will suffice, at an unbeatable price.


Sorry for dropping out of this topic, but I didn't have any time to play around some more. 

In my case, the USB stick was the bottleneck after all (tkaiser won't like this, but the random write test of bonnie did indeed show that it was very slow on those kinds of writes), and an SSD in a USB enclosure did the trick.

 

I have wasted enough time on this project, and since my requirements are met, I can't be bothered to do any more testing. Thank you for all the help though, especially you, tkaiser!

I'll probably have a go at the new tool you've implemented though  :)


the USB stick was the bottleneck after all (tkaiser won't like this, but the random write test of bonnie did indeed show that it was very slow on those kinds of writes), and an SSD in a USB enclosure did the trick.

 

Yes, it was quite obvious that you had a problem with random writes. And the following is not for you, but for anybody else stumbling across this thread.

 

NEVER use bonnie/bonnie++, dd or hdparm to measure storage performance with Armbian.

 

We ship with iozone for a reason, since iozone testing is more reliable and especially since the results are more easily comparable.

 

Do a chdir to the directory in question and then run

iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2

to get a quick overview of how your storage behaves, and think about running the 3 sets of tests outlined here if you're interested in maximum sequential transfer speeds.

 

The most important rule when benchmarking: don't trust the numbers you get, since they are wrong most of the time. Some tools prevent you from measuring wrong numbers a bit better than others, but none is perfect. A short list of really incapable 'benchmark' tools (unfortunately these are the 3 most recommended amongst SBC users):

  • hdparm: only a weird joke, since the 'benchmark' runs for just a few seconds and the chance that the results are influenced by a short write burst in the meantime is 50% or higher. If used as often recommended (-tT) you're also conflating disk throughput with memory bandwidth, which makes the results even more meaningless
  • dd: most often used with the wrong flags, therefore not testing the disk but RAM (caches/buffers) instead, only testing sequential speeds and mostly with absolutely irrelevant block sizes that tell you nothing about 'real world' performance (see the sketch after this list)
  • bonnie/bonnie++: they test something, but definitely not what they claim to. Bonnie(++) output is interesting if you're an engineer tuning your operating system or learning how compiler switches might influence benchmarks, but it's nothing for end users
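
To illustrate the dd point, compare how the flags change what is actually measured (a sketch writing a test file to the current filesystem):

# without a sync flag this mostly measures RAM (page cache), not the disk
dd if=/dev/zero of=testfile bs=1M count=500
# forcing data to reach the medium gives a less wrong number,
# but it's still only a sequential test with a single block size
dd if=/dev/zero of=testfile bs=1M count=500 conv=fdatasync
rm testfile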

The same applies to 'mixed' tests that also involve high CPU utilisation (scp test results vary depending on the strength of the negotiated SSH ciphers, so on weak ARM boards you're most probably testing the speed of encryption algorithms and not storage/network) or that conflate disk throughput with network throughput (FTP, NFS, SMB, whatever). These tests are useful to get an idea of how your application might behave, but they can't help you identify the bottlenecks or improve the situation.


Hello. I found this thread when I was searching for the best option for a low-powered torrent box. I signed up so that I could share the following:

 

I was using a NAS with an 88F5182 CPU and 128 MB of RAM, modded with Transmission, and it could download at max speeds of around 2+ MB/s, but it would crash once in a while. Now it's malfunctioning (it's around 5 years old), so I had to pry it open to take out the HD (replaced more than once; the current one was manufactured in 2013 and came from an old laptop) and test it. (The process for installing and prepping a new HD is tedious.)

 

When I tried getting the same torrent using the PC (i5 with 8 GB RAM) attached to the same router, I got speeds of more than 5 Mb/s with no crashes.

 

Given that, I'm guessing that the best torrent boxes require enough CPU and memory to download at high speeds. If that's the case, then they will require more electricity and might be something like a cheap laptop, a NUC, a used x86 desktop, etc. Another alternative is a seedbox with a monthly rate.

 


18 hours ago, danzig said:

I was using a NAS with an 88F5182 CPU and 128 MB of RAM, modded with Transmission, and it could download at max speeds of around 2+ MB/s, but it would crash once in a while.

 

 

I use an Odroid HC1 with a 2.5" drive connected; it runs both Transmission and Deluge. The download speeds are limited by my internet connection and peers (CPU usage is never an issue; I have seen 30 MB/s before).

 

Never crashed before.


This topic is now closed to further replies.