
Posted
7 hours ago, valant said:

I am wondering (since I have no possibility to check this directly), how these bridges represent themselves to the host - more specifically - do they expose themselves as AHCI/SATA controllers, the same way as their x86 brothers sitting on PCIe do?

Example from lshw for a UAS-capable USB-to-SATA bridge:

            *-usb
                   description: Mass storage device
                   product: USB to ATA/ATAPI Bridge
                   vendor: JMicron
                   physical id: 2
                   bus info: usb@1:2
                   logical name: scsi5
                   version: 81.05
                   serial: [REMOVED]
                   capabilities: usb-2.10 scsi
                   configuration: driver=uas maxpower=500mA speed=480Mbit/s

 

Usually, if it doesn't have any bugs (like some GL* controllers), passes SMART data and SCSI commands through, and has UAS capability (and isn't blacklisted anywhere), it's good enough, and who cares how exactly it is represented.
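If in doubt whether a particular bridge got blacklisted from UAS at runtime, a quick check (a sketch; the built-in quirk tables in the kernel's drivers/usb/storage/unusual_uas.h apply regardless):

# Quirks passed via usb-storage.quirks on the kernel command line show up here;
# empty output means none were set at runtime:
cat /sys/module/usb_storage/parameters/quirks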

 

7 hours ago, valant said:

Everybody is talking about UAS here, so probably they don't expose their SATA nature (more than that SAT thing), but I am still totally ignorant of USB internals, so I might be wrong in presuming that if they exposed themselves as SATA controllers sitting on a USB bus, there would be no need for UAS with them. If they don't act like their pci-ide/ahci x86 counterparts, then why? Why can a SATA controller on the SoC interconnect or PCIe bus be a "native SATA", but not when put on USB?

AFAIK their nature should be interpreted as "devices that implement (a subset of) SCSI". USB BOT hides that nature under the "block device" abstraction, and UAS uses the SCSI command set. I think you could make a controller that bypasses SATA entirely: instead of "storage->SATA->SCSI over USB/UAS" you would get "storage->SCSI over USB/UAS". So exposing their SATA nature is not mandatory.
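To see how such a bridge actually presents itself to the host, a quick check (a sketch; /dev/sda is just an example device name):

# Which driver got bound: uas or the older usb-storage (BOT)?
lsusb -t | grep -i 'driver='
# Does the SAT (SCSI/ATA Translation) layer pass SMART data through?
smartctl -d sat -a /dev/sda | head -n 20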

Posted
On 21.8.2017 at 4:26 PM, tkaiser said:

to get a more realistic idea about the AES encryption potential when more than one CPU core is involved I would suggest running:


tk@pinebook:~$ cat check-ssl-speed.sh
#!/bin/bash
# Start one 'openssl speed' instance per CPU core (quad-core A53 here) in
# parallel and loop forever, to measure sustained multi-core AES throughput:
while true; do
	for i in 0 1 2 3 ; do
		openssl speed -elapsed -evp aes-256-cbc 2>/dev/null &
	done
	# wait for all four background runs to finish before the next round
	wait
done
tk@pinebook:~$ ./check-ssl-speed.sh | grep "^aes-256-cbc"


I repeated this test yesterday after I found out why my H5 openssl benchmark scores were so low: I was testing there with an armhf userland and not an arm64 one (32-bit vs. 64-bit, but in this case more importantly: compiler switches forbid the use of the ARMv8 crypto extensions when building armhf binaries).
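To check this on another board, a minimal sketch: the kernel advertises the crypto extensions in /proc/cpuinfo, and comparing the EVP code path with the generic C implementation makes the difference obvious:

# Are the ARMv8 crypto extensions exposed at all?
grep -o -E 'aes|pmull|sha1|sha2' /proc/cpuinfo | sort -u
# EVP path (can use the crypto extensions) vs. generic C implementation:
openssl speed -elapsed -evp aes-256-cbc 2>/dev/null | tail -n 1
openssl speed -elapsed aes-256-cbc 2>/dev/null | tail -n 1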

 

I also set up my 'monitoring PSU' stuff (letting a Banana Pro provide power to ROCK64 to be able to measure consumption without a PSU being involved) and got the following consumption/performance numbers (on a 4GB pre-production sample -- there are some significant changes on production boards, so idle / Ethernet consumption might differ!):

 

  • idle @ 408 MHz / legacy kernel: 1450 mW
  • when switching GbE to Fast Ethernet: 1100 mW (the usual 350 mW drop)
  • after poweroff: 580 mW (so there's currently some stuff missing and the board does not really power off cleanly)

 

Testing with the armhf userland, walking through the 408, 816 and 1200 MHz DVFS operating points (the value after the arrow is the consumption increase over idle, e.g. 1360 mW - 1100 mW = 260 mW):

armhf  408 MHz:    52,000k / 1360mW --> 260 mW
armhf  816 MHz:   104,000k / 1710mW --> 610 mW
armhf 1200 MHz:   153,000k / 2790mW --> 1690 mW

Now with arm64 (and Debian Stretch, and a different idle consumption -- 1450 mW -- since Gigabit Ethernet is enabled):

arm64  408 MHz:   730,000k / 1730mW --> 280 mW
arm64  816 MHz: 1,508,000k / 2170mW --> 720 mW
arm64 1200 MHz: 2,200,000k / 3340mW --> 1890 mW

With ARMv8 crypto extensions active we see a slight increase in consumption (and temperature -- with armhf, temperature maxed out at 76°C at 1200 MHz, while the arm64 tests triggered some throttling after 6 minutes since no heatsink was applied to the RK3328 and SoC temp exceeded 80°C at an ambient temp of 26°C). But the performance increase is massive (14 times faster).

Posted

And another update, this time on USB3 performance. Since a small M.2 SSD arrived today, I immediately tested it with native SATA on a Clearfog Pro and also gave it a try in a tiny JMS578 enclosure with ROCK64 (details for both SSD and enclosure).

 

Storage performance measured with the usual iozone call with SATA (to show the SSD's performance) and then attached to ROCK64's USB3 port, first with BSP kernel (4.4.77), then mainline 4.13-rc1:
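The table layout below matches the usual Armbian iozone invocation along these lines (reconstructed from the column headers and record sizes, not quoted from the post; the 3 GB row would be a second run):

# -e includes fsync in the timings, -I uses O_DIRECT to bypass the page cache
iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
iozone -e -I -a -s 3000M -r 16384k -i 0 -i 1 -i 2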

TS120GMTS420 SATA Clearfog                                    random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    86865   124684   140973   140224    32522   101844
          102400      16   206105   251838   293776   282729    84623   202503
          102400     512   363707   377155   409380   401946   358966   372971
          102400    1024   364578   358095   402539   404249   387812   376067
          102400   16384   310808   407947   500231   497107   470879   398703
         3072000   16384   420958   403649   509368   507120   469027   405752

TS120GMTS420 JMS578 ROCK64 4.4.77 1296 MHz                    random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    25793    31181    36545    38058    19837    32840
          102400      16    74689    87314   127011   127494    62706    88478
          102400     512   298948   322459   323666   325943   304947   311285
          102400    1024   333159   343551   348576   351217   329950   344678
          102400   16384   369142   377884   377184   380461   362302   359235
         3072000   16384   374446   380254   382541   381844   363431   379725
         
TS120GMTS420 JMS578 ROCK64 4.13.0-rc1                         random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    18098    22215    22757    22750    17104    21932
          102400      16    61487    73079    75408    75522    56182    73369
          102400     512   287554   299840   281735   283843   283678   300074
          102400    1024   299159   314982   295748   297612   297422   305827
          102400   16384   249205   357761   341508   343535   343730   346455
         3072000   16384   350303   346587   346114   346187   346233   341654

As expected, the USB3 performance numbers are lower compared to native SATA on the Clearfog Pro. But why do the mainline kernel results look that bad? Because no cpufreq scaling is working there yet, so we're running the SoC at just the 600 MHz set by u-boot. So let's test again with the BSP kernel, this time limiting cpufreq to 600 MHz instead of 1296 MHz:

TS120GMTS420 JMS578 ROCK64 4.4.77 600 MHz                     random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    15937    21334    25571    25576    17430    21122
          102400      16    53353    65892    86114    85993    50773    65037
          102400     512   273837   288669   276365   278220   261956   290194
          102400    1024   296038   321502   309817   311969   293723   321893
          102400   16384   363437   374665   344870   350489   345225   372353
         3072000   16384   357195   367268   364944   365021   344555   367027

Now the above 4.13 numbers look a lot better! :)
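For reference, a minimal sketch of how such a 600 MHz limit can be set at runtime through the standard cpufreq sysfs interface (as root; whether all cores share one policy or each needs the same write depends on the cpufreq driver):

# Cap the maximum CPU frequency at 600 MHz (value is in kHz):
echo 600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq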

Posted (edited)
On 8/22/2017 at 1:23 AM, tkaiser said:

Hmm... to summarize the 'OpenSSL 1.0.2g  1 Mar 2016' results for the 3 boards/SoCs tested above with some more numbers added (on all A53 cores with crypto extensions enabled, performance is directly proportional to CPU clockspeed -- nice):


ROCK64 @ 1.488GHz

type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-cbc     179760.41k   500823.06k   837207.72k  1040921.94k  1120367.96k
aes-192-cbc     166930.03k   430311.10k   668937.47k   795866.79k   842443.43k
aes-256-cbc     160133.98k   390004.61k   572388.78k   662836.57k   694777.17k

The little board that could has broken 1GB/s

 

Edit: According to this, it should go all the way up to 1.5 GHz without too much extra in the way of Vcore.

Edited by chrisf
Posted
On 30.11.2017 at 3:50 AM, chrisf said:

Edit: According to this, it should go all the way up to 1.5 GHz without too much extra in the way of Vcore.

 

Adding unreliable overclocking settings on Allwinner and Rockchip devices is unfortunately pretty easy (and the proposed patch looks wrong already in the first line, since it increases cpufreq by 100 MHz without adjusting the voltage at all). That's why stuff like https://github.com/ehoutsma/StabilityTester exists. Feel free to try it out; @ErwinH used the tool last year to develop a sane DVFS OPP table for Orange Pi PC2 (since we found out that this highly optimized Linpack is a great way to check for instabilities caused by DVFS undervoltage before the board freezes/crashes -- a lesson learned when dealing with unreliable overclocking on Pine64 before).

Posted
11 hours ago, tkaiser said:

the proposed patch looks wrong already in the first line since increasing cpufreq by 100 MHz without adjusting the voltage at all

 

Completely wrong, it turns out.

Turns out the RK805 PMIC needs the voltage increased in 12.5 mV steps, and the frequency needs to be specific values too; they're defined in drivers/clk/rockchip/clk-rk3328.c.

They're defined up to 1.8 GHz, but I didn't get any above 1.488 GHz to work; cpufreq wouldn't load them.

It does run stable at 1.488 GHz though, if you increase the voltage. I'm using 1.4375 V and haven't tested any lower.
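To verify which OPPs cpufreq actually accepted and what Vcore is currently set to, the standard sysfs interfaces help (a sketch; regulator numbering varies per board):

# An OPP that cpufreq rejected simply won't show up here:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
# Current regulator voltages in microvolts (1437500 µV = 1.4375 V, a multiple of the 12.5 mV step):
grep . /sys/class/regulator/regulator.*/microvolts 2>/dev/null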

Done testing stability:
Frequency: 1392000      Voltage: 1350000        Success: 1      Result:        0.0048034
Frequency: 1416000      Voltage: 1375000        Success: 1      Result:        0.0048034
Frequency: 1488000      Voltage: 1437500        Success: 1      Result:        0.0048034

I had to edit the script for min, max and cooldown frequencies as well as the Vcore regulator.

Peaks around 78°C, with idle @ 408 MHz at 40 to 45°C.

 

minerd --benchmark reports 4.61 khash/s (or 4.58 while armbianmonitor -M is running).

After a few minutes it's sitting around 79°C.

Posted

Would it be possible to get someone to confirm that a current "dev" image build is bootable? I'm not getting any UART output with the SD card inserted; my eMMC (my fault) at least tells me I broke its contents.

 

[edit] OK, the downloadable 4.4 image boots. I'm building an unmodified dev image (I was using a different kernel, which shouldn't have broken u-boot though) to verify.

Posted

I think I found it, although I'm not sure why this would happen:

ccache: error: Could not find compiler "aarch64-linux-gnu-gcc" in PATH
ccache: error: Could not find compiler "aarch64-linux-gnu-gcc" in PATH

This shows up when compiling u-boot, so that build fails but doesn't abort the script, which still lets you build the (then broken) image.
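A quick sanity check for this situation (a sketch; ccache only relays what it finds in PATH):

# Is the cross-compiler ccache complains about actually reachable?
command -v aarch64-linux-gnu-gcc || echo "aarch64-linux-gnu-gcc not in PATH"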

 

Well then, that didn't repeat on a second try; stay posted.


Posted

Can you test Docker containers on ROCK64 on different storage: USB3, microSD, USB2 and ramdisk?

 

The primary test setup would be: install Docker, set up SSL certs, use a ready-to-go nginx + php-fpm Docker container, and run some PHP scripts with possible memcache access.

 

Why Docker? Because it is easy to pass a storage path into the test without installing the needed software on the host OS. And later to scale everything using GbE :)
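Something along these lines, as a sketch (mount paths are just examples; the official nginx image serves /usr/share/nginx/html):

# The same container pointed at different storage via bind mounts:
docker run -d --name web-usb3 -v /mnt/usb3/www:/usr/share/nginx/html -p 8080:80 nginx
docker run -d --name web-sd -v /mnt/microsd/www:/usr/share/nginx/html -p 8081:80 nginx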

 

Test area: a real-life test bench covering SSL encryption/decryption, storage and RAM performance, and access timings. Some sort of PHP math calculations, or using the OpenSSL library to load the CPU cores.
