TFR's Achievements

  1. Sorry for dropping out of this topic, but I didn't have any time to play around some more. In my case, the USB stick was the bottleneck after all (tkaiser won't like this, but bonnie's random write test did indeed show that it was very slow on that kind of write), and an SSD in a USB enclosure did the trick. I have wasted enough time on this project, and since my requirements are met, I can't be bothered to do any more testing. Thank you for all the help though, especially you, tkaiser! I'll probably have a go at the new tool you've implemented, though.
  2. Is it? Or does that not matter, and it just means it collects x MB of random writes and starts writing them at that point? I tried rTorrent, and at first sight it seemed to perform better. I'll install a web GUI tonight so it's easier to keep track of it. I have read up a bit about your active benchmarking, but it seemed relatively hard to set up and to interpret the output. It was easier to just try with an SSD.
  3. I tried the SSD on a Raspberry Pi (2) too, and it seems that it was also the USB drive that was bottlenecking everything: I averaged 4.5 MB/s on the Raspberry as well. I am having a real hard time comprehending why the USB stick is such a bottleneck. I tried writing a 15.5 GB file (8 GB of RAM in the system) to it in Windows (formatted as NTFS), which took 9 min 29 s, or nearly 30 MB/s. The stick was unplugged directly after the copy was done; to verify the copy was indeed successful, I calculated a hash of the source file and of the file on the stick. So why, even with a big cache, is the performance so bad with Transmission? 15 seconds to download 60 MB, 2-3 seconds to write it to the stick. That is, if the writes are sequential. I must be missing something :/ It's a Kingston DT Rubber 64GB, by the way, first generation.
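The hash check described in the post above can be sketched in shell. This is a self-contained toy (temp files stand in for the downloaded file and the copy on the stick); in practice you would point `sha256sum` at the real source file and the file on the USB drive:

```shell
#!/bin/bash
# Minimal sketch of verifying a copy by comparing checksums.
# Temp files stand in for the real source file and the copy on the stick.
set -e
src=$(mktemp)
dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1M count=4 status=none  # fake "source" file
cp "$src" "$dst"                                        # the copy to verify
h_src=$(sha256sum "$src" | cut -d' ' -f1)
h_dst=$(sha256sum "$dst" | cut -d' ' -f1)
if [ "$h_src" = "$h_dst" ]; then echo "hashes match"; else echo "MISMATCH"; fi
rm -f "$src" "$dst"
```

Note that a matching hash only proves the data eventually reached the stick; it says nothing about how fast the random 4K writes of a bittorrent client would land on it.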
  4. Linux orangepipc 3.4.110-sun8i #18 SMP PREEMPT Tue Mar 8 20:03:32 CET 2016 armv7l GNU/Linux I stopped Transmission, set the cache to 64 MB (plenty of RAM, only 100-150 MB was used) and then started it again. The writing to USB is the problem. If I set the cache to 800 MB, it goes fast until the RAM is full and then hangs (average speed about 4 MB/s or so). One core is at 100%, one at 46%, and two at 10-15%. It's the whole device that hangs: starting a new SSH session hangs, logging on on the local console hangs, htop doesn't do anything anymore. It has been frozen for 5 minutes now or so. Pretty much the same problem as on the Raspberry. I'm going to try with an SSD in a docking station now. EDIT: It does seem to be the USB drive. With the SSD in a dock, I get nearly 5 MB/s measured over 10 minutes. If someone could help me figure out why there's such a big difference, so I know what to check for when buying a USB drive, that would be great!
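For reference, the cache size being changed here is the `cache-size-mb` key in Transmission's settings.json. A sketch of editing it, done on a throwaway copy of the file (on Debian-style installs the real file is typically /etc/transmission-daemon/settings.json, and the daemon must be stopped first or it rewrites the file on exit):

```shell
#!/bin/bash
# Sketch: bump Transmission's write cache via settings.json.
# Working on a throwaway copy here; adapt the path for a real install.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
    "cache-size-mb": 4,
    "download-dir": "/media/usb"
}
EOF
sed -i 's/"cache-size-mb": [0-9]*/"cache-size-mb": 64/' "$cfg"
grep '"cache-size-mb"' "$cfg"
rm -f "$cfg"
```

A bigger cache only delays the problem, as the post shows: once the dirty pages exceed what the kernel will buffer, everything stalls waiting on the slow device.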
  5. wget at work got 7.03 MB/s from the same site, so I'm pretty sure the network is not the issue. The USB drive is formatted as ext4. I am not sure how to check the disk access pattern of Transmission; how should htop be sorted to show the relevant information? I have a Kingston DataTraveler Workspace (licensed Windows To Go stick, 150 euro or something like that for 64 GB). While waiting for some more instructions, I'll give that a go; at least I'll be sure it's not the stick then. I also tried setting vm.swappiness=0 and vm.min_free_kbytes=0 in /etc/sysctl.conf (someone mentioned it fixed the kswapd issue for him), rebooted and tried again, but the problem was the same. I am not that familiar with Linux and all the command-line tools, as you might have noticed.
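The sysctl tweak mentioned above, as the lines one would append to /etc/sysctl.conf (these are the exact values tried in the post, not a recommendation; they apply at boot, or immediately via `sudo sysctl -p`):

```
# Values tried in the post above; vm.min_free_kbytes=0 in particular is
# aggressive and generally not advisable as a permanent setting.
vm.swappiness=0
vm.min_free_kbytes=0
```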
  6. Made an account and did some benchmarking! wget (with --output-document=/dev/null) averaged 2.23 MB/s, pretty much the max of my home connection. Will try at work tomorrow (I've got 350 Mbps down there).

     dd:

         orangepi@orangepipc:~$ time sh -c "dd oflag=direct if=/dev/zero of=/media/usb/tmp bs=4K count=10000 && sync"
         10000+0 records in
         10000+0 records out
         40960000 bytes (41 MB) copied, 26.2403 s, 1.6 MB/s

         real    0m28.506s
         user    0m0.000s
         sys     0m0.950s

         orangepi@orangepipc:~$ time sh -c "dd oflag=direct if=/dev/zero of=/media/usb/tmp bs=1M count=100 && sync"
         100+0 records in
         100+0 records out
         104857600 bytes (105 MB) copied, 5.26363 s, 19.9 MB/s

         real    0m5.337s
         user    0m0.000s
         sys     0m0.170s

     I did longer runs first, but my PC crashed when I was posting the result. Speeds were the same.

     Bonnie++:

         Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
         Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
         Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
         orangepipc       2G   112  98 18230  12 12774   8   540  99 49408  12  96.2   3
         Latency             90441us    1195ms    1196ms   17282us   10268us    1188ms
         Version  1.97       ------Sequential Create------ --------Random Create--------
         orangepipc          -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                       files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                          16  2664  18 +++++ +++ 11275  59  9487  63 +++++ +++  7905  42
         Latency               755us    1947us    2037us     822us     135us    1733us

         1.97,1.97,orangepipc,1,1458080683,2G,,112,98,18230,12,12774,8,540,99,49408,12,96.2,3,16,,,,,2664,18,+++++,+++,11275,59,9487,63,+++++,+++,7905,42,90441us,1195ms,1196ms,17282us,10268us,1188ms,755us,1947us,2037us,822us,135us,1733us
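Both dd runs above are sequential writes; the access pattern a bittorrent client produces is closer to scattered 4K writes. A crude random-write probe (not from the original post) can be improvised with dd alone by seeking to random 4K-block offsets. This is bash-specific because of $RANDOM, and /tmp stands in for the mounted stick:

```shell
#!/bin/bash
# Crude 4K random-write probe using only dd (bittorrent-like access pattern).
# Not from the original post; the target path is a stand-in for the USB mount.
f=/tmp/randwrite.test
dd if=/dev/zero of="$f" bs=1M count=16 status=none       # 16 MB scratch file
for i in $(seq 1 256); do
    off=$((RANDOM % 4096))                               # random 4K-block offset
    dd if=/dev/urandom of="$f" bs=4k count=1 seek="$off" \
       conv=notrunc status=none
done
echo "wrote 256 random 4K blocks"
rm -f "$f"
```

Wrapping the loop in `time` (and adding `oflag=direct` to the inner dd) gives a rough random-write figure for the device; for serious numbers, bonnie++'s random-seek column or a dedicated tool like fio is far more reliable.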