Trying to compile Pine H64



Just now, tkaiser said:

 

Already patched and benchmarking. My board seems to be fine with 1800 MHz @ 1080mV :)

I suggest you check your board's speed bin.

 

According to the source code, it's at offset 0x1c of the SID, bits 5-7 (counting from the LSB).

 

0b011 is best, 0b010 is middle (both use the lower-voltage DVFS table, with 1800 MHz @ 1100mV), 0b001 is worst (this uses the higher-voltage table, 1800 MHz @ 1160mV).

 

Here's a FEL command to calculate the speed bin (in bash):

```
echo $((($(sunxi-fel readl 0x0300621c) >> 5) & 0x7))
```

 

On my early Pine H64 model A sample and my early (current) model B sample (both with "H7309BA 6842" printed on the SoC) it's bin 1; on my later Pine H64 model A sample ("H6448BA 7782") it's bin 3.


52 minutes ago, Icenowy said:

Here's a FEL command to calculate the speed bin

 

No FEL setup here currently. Is there a devmem2 alternative?

 

Anyway: just added the PineH64 at 1800 MHz to the list: https://github.com/ThomasKaiser/sbc-bench/blob/master/Results.md

 

Quick look at USB3: when attaching my EVO840 SSD in UAS-capable disk enclosures (JMS567 and then ASM1153), I get these messages:

[ 1890.116983] xhci-hcd xhci-hcd.0.auto: ERROR Transfer event for unknown stream ring slot 1 ep 6
[ 1890.116987] xhci-hcd xhci-hcd.0.auto: @00000000b8048580 00000000 00000000 1b000000 01078001
...
[ 2444.903488] xhci-hcd xhci-hcd.0.auto: ERROR Transfer event for unknown stream ring slot 1 ep 2
[ 2444.903493] xhci-hcd xhci-hcd.0.auto: @00000000b8048b20 00000000 00000000 1b000000 01038000

UAS storage performance is as follows (ignore the ASM1153 numbers: this enclosure cannot be powered externally, doesn't get enough juice from my PineH64's USB3 port -- I have an early 'Model A' developer sample -- and therefore chooses a strategy to deal with the underpowering --> lowering performance):

EVO 840 / JMS567                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4     6090     6479     6300     6358     6403     6508
          102400      16    24386    24449    24493    24570    24570    24445
          102400     512    72744    72826   253654   261201   260663    72693
          102400    1024    73473    73695   302101   314144   313409    74325
          102400   16384    73443    77008   336140   350131   348980    76653
          102400   16384    73768    77008   336151   350292

EVO 840 / ASM1153 (under)powered by PineH64                   random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4     6464     6907     6427     6424     6493     6828
          102400      16    24434    24532    24956    25071    25039    24482
          102400     512    70552    70668    87827    88720    88691    70667
          102400    1024    70528    70713    92226    93154    93183    70684
          102400   16384    69263    72237    96030    96996    97009    72169
          102400   16384    69044    71879    96031    97134

Read performance isn't bad but write is too low. I remember @Xalius reported the same when playing with the H6 months ago. I also tested with the RTL8153 dongle, which is known to saturate link speed (940 Mbits/sec). Too early to tell, since it only reaches ~200 Mbits/sec in both directions.

 

Onboard GbE looks promising (though there are a few Mbits/sec missing in the TX direction):

bash-3.2# iperf3 -c 192.168.83.65
Connecting to host 192.168.83.65, port 5201
[  4] local 192.168.83.64 port 60382 connected to 192.168.83.65 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   113 MBytes   948 Mbits/sec                  
[  4]   1.00-2.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   2.00-3.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   3.00-4.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   4.00-5.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   5.00-6.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   6.00-7.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   7.00-8.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   8.00-9.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   9.00-10.00  sec   112 MBytes   940 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  sender
[  4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver

iperf Done.

bash-3.2# iperf3 -R -c 192.168.83.65
Connecting to host 192.168.83.65, port 5201
Reverse mode, remote host 192.168.83.65 is sending
[  4] local 192.168.83.64 port 60379 connected to 192.168.83.65 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   108 MBytes   902 Mbits/sec                  
[  4]   1.00-2.00   sec   108 MBytes   905 Mbits/sec                  
[  4]   2.00-3.00   sec   108 MBytes   905 Mbits/sec                  
[  4]   3.00-4.00   sec   108 MBytes   905 Mbits/sec                  
[  4]   4.00-5.00   sec   108 MBytes   905 Mbits/sec                  
[  4]   5.00-6.00   sec   108 MBytes   905 Mbits/sec                  
[  4]   6.00-7.00   sec   108 MBytes   905 Mbits/sec                  
[  4]   7.00-8.00   sec   108 MBytes   905 Mbits/sec                  
[  4]   8.00-9.00   sec   108 MBytes   905 Mbits/sec                  
[  4]   9.00-10.00  sec   108 MBytes   905 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.06 GBytes   906 Mbits/sec    1             sender
[  4]   0.00-10.00  sec  1.05 GBytes   905 Mbits/sec                  receiver

iperf Done.

 

Full debug output (including dmesg output for USB events): http://ix.io/1jEy


3 minutes ago, tkaiser said:

No FEL setup here currently. Is there a devmem2 alternative

Sure, devmem works just as well!

 

4 minutes ago, tkaiser said:

When attaching my EVO840 SSD in UAS-capable disk enclosures (JMS567 and then ASM1153), I get these messages

Do they appear on other SoCs w/ DWC3?

 

5 minutes ago, tkaiser said:

Read performance isn't bad but write is too low

Maybe I need to check the DRAM controller or bus clocking code?


3 minutes ago, Icenowy said:

Do they appear on other SoCs w/ DWC3?

Yes: RK3328, RK3399, Exynos 5244. The error messages show up on the RK boards as soon as the enclosures appear on the bus:

[ 1503.960555] xhci-hcd xhci-hcd.2.auto: ERROR Transfer event for disabled endpoint or incorrect stream ring
[ 1503.960574] xhci-hcd xhci-hcd.2.auto: @00000000b684e310 00000000 00000000 04000000 03038001

 

9 minutes ago, Icenowy said:

Maybe I need to check the DRAM controller or bus clocking code?

 

I wonder whether the tinymembench numbers are OK (especially memcpy) and whether we can compare with the Allwinner BSP?


16 minutes ago, tkaiser said:

Read performance isn't bad but write is too low

Could you run `devmem 0x03001510 w 0x03000002` before testing?

 

(Although I can't be sure whether this will help...)


5 minutes ago, Icenowy said:

Could you run `devmem 0x03001510 w 0x03000002` before testing?

 

Doesn't do the trick -- still below 80 MB/s:

root@pineh64:/mnt/sda# /usr/local/src/devmemX/devmem2 0x03001510 w 0x03000002
root@pineh64:/mnt/sda# iozone -e -I -a -s 1000M -r 16384k -i 0 -i 1 | grep 102400
	File size set to 1024000 kB
         1024000   16384    77356    77853   348247   349195

 


On 8/1/2018 at 12:58 AM, @lex said:

If you don't mind, I'll throw in some numbers here grabbed from an OPi One Plus so we can all compare to the PineH64

 

I've now benchmarked the PineH64 at 1800 MHz: https://github.com/ThomasKaiser/sbc-bench/commit/2f4dfe6a21db48bc130fc3801b78caf4412bb048

 

Are you also running Allwinner's BSP on your H6 boards? It would be pretty interesting to get tinymembench results with BSP DRAM initialization and kernel (see the end of the thread).


1 hour ago, tkaiser said:

Are you also running Allwinner's BSP on your H6 boards? It would be pretty interesting to get tinymembench results with BSP DRAM initialization and kernel (see the end of the thread).

I am running @Icenowy's kernel at 1.5 GHz: 3.10.65 with some minor modifications, and 4.4.y/4.9.y without Ethernet and with axp805/6 not recognized.

I will possibly run tinymembench tomorrow (lack of time)...

Edited by @lex
4.9 - too many kernels in my mind, sorry

6 hours ago, tkaiser said:

tinymembench results with BSP DRAM initialization and kernel

 

OrangePi One Plus, BSP 4.9, stock kernel 4.9.56 for analysis.

Here is the DRAM initialization in use (hope that's what you meant):

[271]PMU: AXP806
[275]set pll start
[278]set pll end
[279]rtc[0] value = 0x00000000
[282]rtc[1] value = 0x00000000
[285]rtc[2] value = 0x00000000
[288]rtc[3] value = 0x00000000
[292]rtc[4] value = 0x00000000
[295]rtc[5] value = 0x00000000
[298]DRAM VERSION IS V2_5
[300]PMU:Set DDR Vol 1200mV OK.
[304]PMU:Set DDR Vol 1200mV OK.
[314]BYTE0 GATE ERRO IS = 00000002
[318]BYTE1 GATE ERRO IS = 00000002
[321]BYTE2 GATE ERRO IS = 00000002
[324]BYTE3 GATE ERRO IS = 00000002
[336]DRAM CLK =744 MHZ
[338]DRAM Type =7 (3:DDR3,4:DDR4,6:LPDDR2,7:LPDDR3)
[343]DRAM zq value: 003b3bfb
[350]IPRD=006f006e--PGCR0=00000f5d--PLL=b0003d00
[354]DRAM SIZE =1024 M,para1 = 000030fa,para2 = 04000000
[366]DRAM simple test OK.
[369]dram size =1024

tinymembench, first run with the interactive governor:


 

tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and writen           ==
==         bytes would have provided twice higher numbers)              ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================

 C copy backwards                                     :   1620.8 MB/s (1.4%)
 C copy backwards (32 byte blocks)                    :   1661.3 MB/s (3.3%)
 C copy backwards (64 byte blocks)                    :   1620.9 MB/s (1.3%)
 C copy                                               :   1589.5 MB/s (1.2%)
 C copy prefetched (32 bytes step)                    :   1246.6 MB/s (1.1%)
 C copy prefetched (64 bytes step)                    :   1146.1 MB/s (0.6%)
 C 2-pass copy                                        :   1445.3 MB/s
 C 2-pass copy prefetched (32 bytes step)             :    998.9 MB/s (2.6%)
 C 2-pass copy prefetched (64 bytes step)             :    937.3 MB/s (0.7%)
 C fill                                               :   5560.6 MB/s (1.8%)
 C fill (shuffle within 16 byte blocks)               :   5560.5 MB/s (1.6%)
 C fill (shuffle within 32 byte blocks)               :   5560.5 MB/s (1.9%)
 C fill (shuffle within 64 byte blocks)               :   5560.8 MB/s (1.3%)
 ---
 standard memcpy                                      :   1738.8 MB/s (0.5%)
 standard memset                                      :   5562.1 MB/s (1.8%)
 ---
 NEON LDP/STP copy                                    :   1695.8 MB/s (0.6%)
 NEON LDP/STP copy pldl2strm (32 bytes step)          :   1079.8 MB/s (1.2%)
 NEON LDP/STP copy pldl2strm (64 bytes step)          :   1366.2 MB/s (0.6%)
 NEON LDP/STP copy pldl1keep (32 bytes step)          :   1844.0 MB/s
 NEON LDP/STP copy pldl1keep (64 bytes step)          :   1850.0 MB/s
 NEON LD1/ST1 copy                                    :   1715.4 MB/s (1.0%)
 NEON STP fill                                        :   5381.7 MB/s (0.3%)
 NEON STNP fill                                       :   3371.4 MB/s (1.3%)
 ARM LDP/STP copy                                     :   1698.8 MB/s (0.6%)
 ARM STP fill                                         :   5327.6 MB/s
 ARM STNP fill                                        :   3390.1 MB/s (1.1%)

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger is the buffer, the more significant   ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we are expecting to see   ==
== page table walk with several requests to SDRAM for almost every      ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers are representing extra time, which needs to  ==
==         be added to L1 cache latency. The cycle timings for L1 cache ==
==         latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. In the case if    ==
==         the memory subsystem can't handle multiple outstanding       ==
==         requests, dual random read has the same timings as two       ==
==         single reads performed one after another.                    ==
==========================================================================

block size : single random read / dual random read
      1024 :    0.0 ns          /     0.0 ns 
      2048 :    0.0 ns          /     0.0 ns 
      4096 :    0.0 ns          /     0.0 ns 
      8192 :    0.0 ns          /     0.0 ns 
     16384 :    0.0 ns          /     0.0 ns 
     32768 :    0.0 ns          /     0.0 ns 
     65536 :    3.8 ns          /     6.5 ns 
    131072 :    5.8 ns          /     9.1 ns 
    262144 :    6.9 ns          /    10.2 ns 
    524288 :    7.9 ns          /    11.3 ns 
   1048576 :   76.6 ns          /   118.4 ns 
   2097152 :  114.2 ns          /   153.5 ns 
   4194304 :  137.3 ns          /   169.8 ns 
   8388608 :  149.3 ns          /   177.0 ns 
  16777216 :  155.6 ns          /   181.7 ns 
  33554432 :  160.1 ns          /   184.5 ns 
  67108864 :  163.4 ns          /   186.3 ns 

 

 


CPU:



cpufreq-info 
cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009
Report errors and bugs to cpufreq@vger.kernel.org, please.
analyzing CPU 0:
  driver: cpufreq-sunxi
  CPUs which run at the same hardware frequency: 0 1 2 3
  CPUs which need to have their frequency coordinated by software: 0 1 2 3
  maximum transition latency: 2.00 ms.
  hardware limits: 480 MHz - 1.80 GHz
  available frequency steps: 480 MHz, 720 MHz, 816 MHz, 888 MHz, 1.08 GHz, 1.32 GHz, 1.49 GHz, 1.80 GHz
  available cpufreq governors: interactive, conservative, ondemand, userspace, powersave, performance
  current policy: frequency should be within 480 MHz and 1.80 GHz.
                  The governor "interactive" may decide which speed to use
                  within this range.
  current CPU frequency is 480 MHz.
  cpufreq stats: 480 MHz:24.39%, 720 MHz:0.50%, 816 MHz:0.00%, 888 MHz:0.00%, 1.08 GHz:0.00%, 1.32 GHz:0.00%, 1.49 GHz:24.56%, 1.80 GHz:50.54%  (124)
analyzing CPU 1:
  driver: cpufreq-sunxi
  CPUs which run at the same hardware frequency: 0 1 2 3
  CPUs which need to have their frequency coordinated by software: 0 1 2 3
  maximum transition latency: 2.00 ms.
  hardware limits: 480 MHz - 1.80 GHz
  available frequency steps: 480 MHz, 720 MHz, 816 MHz, 888 MHz, 1.08 GHz, 1.32 GHz, 1.49 GHz, 1.80 GHz
  available cpufreq governors: interactive, conservative, ondemand, userspace, powersave, performance
  current policy: frequency should be within 480 MHz and 1.80 GHz.
                  The governor "interactive" may decide which speed to use
                  within this range.
  current CPU frequency is 480 MHz.
  cpufreq stats: 480 MHz:24.39%, 720 MHz:0.50%, 816 MHz:0.00%, 888 MHz:0.00%, 1.08 GHz:0.00%, 1.32 GHz:0.00%, 1.49 GHz:24.56%, 1.80 GHz:50.54%  (124)
analyzing CPU 2:
  driver: cpufreq-sunxi
  CPUs which run at the same hardware frequency: 0 1 2 3
  CPUs which need to have their frequency coordinated by software: 0 1 2 3
  maximum transition latency: 2.00 ms.
  hardware limits: 480 MHz - 1.80 GHz
  available frequency steps: 480 MHz, 720 MHz, 816 MHz, 888 MHz, 1.08 GHz, 1.32 GHz, 1.49 GHz, 1.80 GHz
  available cpufreq governors: interactive, conservative, ondemand, userspace, powersave, performance
  current policy: frequency should be within 480 MHz and 1.80 GHz.
                  The governor "interactive" may decide which speed to use
                  within this range.
  current CPU frequency is 480 MHz.
  cpufreq stats: 480 MHz:24.39%, 720 MHz:0.50%, 816 MHz:0.00%, 888 MHz:0.00%, 1.08 GHz:0.00%, 1.32 GHz:0.00%, 1.49 GHz:24.56%, 1.80 GHz:50.54%  (124)
analyzing CPU 3:
  driver: cpufreq-sunxi
  CPUs which run at the same hardware frequency: 0 1 2 3
  CPUs which need to have their frequency coordinated by software: 0 1 2 3
  maximum transition latency: 2.00 ms.
  hardware limits: 480 MHz - 1.80 GHz
  available frequency steps: 480 MHz, 720 MHz, 816 MHz, 888 MHz, 1.08 GHz, 1.32 GHz, 1.49 GHz, 1.80 GHz
  available cpufreq governors: interactive, conservative, ondemand, userspace, powersave, performance
  current policy: frequency should be within 480 MHz and 1.80 GHz.
                  The governor "interactive" may decide which speed to use
                  within this range.
  current CPU frequency is 480 MHz.
  cpufreq stats: 480 MHz:24.39%, 720 MHz:0.50%, 816 MHz:0.00%, 888 MHz:0.00%, 1.08 GHz:0.00%, 1.32 GHz:0.00%, 1.49 GHz:24.56%, 1.80 GHz:50.54%  (124)

 


Running tinymembench again with the performance governor:


 

tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests                                               ==
==                                                                      ==
== Note 1: 1MB = 1000000 bytes                                          ==
== Note 2: Results for 'copy' tests show how many bytes can be          ==
==         copied per second (adding together read and writen           ==
==         bytes would have provided twice higher numbers)              ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
==         to first fetch data into it, and only then write it to the   ==
==         destination (source -> L1 cache, L1 cache -> destination)    ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
==         brackets                                                     ==
==========================================================================

 C copy backwards                                     :   1615.7 MB/s (1.6%)
 C copy backwards (32 byte blocks)                    :   1649.2 MB/s (1.2%)
 C copy backwards (64 byte blocks)                    :   1628.9 MB/s (1.5%)
 C copy                                               :   1584.9 MB/s (2.0%)
 C copy prefetched (32 bytes step)                    :   1237.4 MB/s (0.5%)
 C copy prefetched (64 bytes step)                    :   1128.7 MB/s
 C 2-pass copy                                        :   1445.7 MB/s (1.1%)
 C 2-pass copy prefetched (32 bytes step)             :    946.5 MB/s
 C 2-pass copy prefetched (64 bytes step)             :    938.1 MB/s (1.3%)
 C fill                                               :   5565.0 MB/s (1.6%)
 C fill (shuffle within 16 byte blocks)               :   5551.9 MB/s (1.6%)
 C fill (shuffle within 32 byte blocks)               :   5565.6 MB/s (1.8%)
 C fill (shuffle within 64 byte blocks)               :   5564.8 MB/s (1.6%)
 ---
 standard memcpy                                      :   1734.8 MB/s (0.4%)
 standard memset                                      :   5330.5 MB/s
 ---
 NEON LDP/STP copy                                    :   1703.5 MB/s (0.3%)
 NEON LDP/STP copy pldl2strm (32 bytes step)          :   1064.5 MB/s (0.9%)
 NEON LDP/STP copy pldl2strm (64 bytes step)          :   1367.2 MB/s (0.4%)
 NEON LDP/STP copy pldl1keep (32 bytes step)          :   1845.4 MB/s
 NEON LDP/STP copy pldl1keep (64 bytes step)          :   1849.3 MB/s
 NEON LD1/ST1 copy                                    :   1678.7 MB/s
 NEON STP fill                                        :   5330.2 MB/s
 NEON STNP fill                                       :   3395.9 MB/s (1.6%)
 ARM LDP/STP copy                                     :   1697.5 MB/s (0.2%)
 ARM STP fill                                         :   5331.2 MB/s
 ARM STNP fill                                        :   3402.0 MB/s (0.7%)

==========================================================================
== Memory latency test                                                  ==
==                                                                      ==
== Average time is measured for random memory accesses in the buffers   ==
== of different sizes. The larger is the buffer, the more significant   ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
== accesses. For extremely large buffer sizes we are expecting to see   ==
== page table walk with several requests to SDRAM for almost every      ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest).                                         ==
==                                                                      ==
== Note 1: All the numbers are representing extra time, which needs to  ==
==         be added to L1 cache latency. The cycle timings for L1 cache ==
==         latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
==         two independent memory accesses at a time. In the case if    ==
==         the memory subsystem can't handle multiple outstanding       ==
==         requests, dual random read has the same timings as two       ==
==         single reads performed one after another.                    ==
==========================================================================

block size : single random read / dual random read
      1024 :    0.0 ns          /     0.0 ns 
      2048 :    0.0 ns          /     0.0 ns 
      4096 :    0.0 ns          /     0.0 ns 
      8192 :    0.0 ns          /     0.0 ns 
     16384 :    0.0 ns          /     0.0 ns 
     32768 :    0.0 ns          /     0.0 ns 
     65536 :    4.6 ns          /     8.4 ns 
    131072 :    7.0 ns          /    11.6 ns 
    262144 :    8.3 ns          /    12.9 ns 
    524288 :    9.5 ns          /    14.2 ns 
   1048576 :   74.7 ns          /   116.1 ns 
   2097152 :  112.3 ns          /   151.1 ns 
   4194304 :  135.2 ns          /   169.3 ns 
   8388608 :  147.1 ns          /   174.6 ns 
  16777216 :  153.5 ns          /   179.3 ns 
  33554432 :  158.0 ns          /   184.8 ns 
  67108864 :  161.6 ns          /   184.4 ns 
 


 

 

 

CPU:

 


cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009
Report errors and bugs to cpufreq@vger.kernel.org, please.
analyzing CPU 0:
  driver: cpufreq-sunxi
  CPUs which run at the same hardware frequency: 0 1 2 3
  CPUs which need to have their frequency coordinated by software: 0 1 2 3
  maximum transition latency: 2.00 ms.
  hardware limits: 480 MHz - 1.80 GHz
  available frequency steps: 480 MHz, 720 MHz, 816 MHz, 888 MHz, 1.08 GHz, 1.32 GHz, 1.49 GHz, 1.80 GHz
  available cpufreq governors: interactive, conservative, ondemand, userspace, powersave, performance
  current policy: frequency should be within 480 MHz and 1.80 GHz.
                  The governor "performance" may decide which speed to use
                  within this range.
  current CPU frequency is 1.80 GHz.
  cpufreq stats: 480 MHz:14.94%, 720 MHz:0.24%, 816 MHz:0.00%, 888 MHz:0.00%, 1.08 GHz:0.00%, 1.32 GHz:0.00%, 1.49 GHz:34.37%, 1.80 GHz:50.45%  (317)
analyzing CPU 1:
  driver: cpufreq-sunxi
  CPUs which run at the same hardware frequency: 0 1 2 3
  CPUs which need to have their frequency coordinated by software: 0 1 2 3
  maximum transition latency: 2.00 ms.
  hardware limits: 480 MHz - 1.80 GHz
  available frequency steps: 480 MHz, 720 MHz, 816 MHz, 888 MHz, 1.08 GHz, 1.32 GHz, 1.49 GHz, 1.80 GHz
  available cpufreq governors: interactive, conservative, ondemand, userspace, powersave, performance
  current policy: frequency should be within 480 MHz and 1.80 GHz.
                  The governor "performance" may decide which speed to use
                  within this range.
  current CPU frequency is 1.80 GHz.
  cpufreq stats: 480 MHz:14.94%, 720 MHz:0.24%, 816 MHz:0.00%, 888 MHz:0.00%, 1.08 GHz:0.00%, 1.32 GHz:0.00%, 1.49 GHz:34.37%, 1.80 GHz:50.45%  (317)
analyzing CPU 2:
  driver: cpufreq-sunxi
  CPUs which run at the same hardware frequency: 0 1 2 3
  CPUs which need to have their frequency coordinated by software: 0 1 2 3
  maximum transition latency: 2.00 ms.
  hardware limits: 480 MHz - 1.80 GHz
  available frequency steps: 480 MHz, 720 MHz, 816 MHz, 888 MHz, 1.08 GHz, 1.32 GHz, 1.49 GHz, 1.80 GHz
  available cpufreq governors: interactive, conservative, ondemand, userspace, powersave, performance
  current policy: frequency should be within 480 MHz and 1.80 GHz.
                  The governor "performance" may decide which speed to use
                  within this range.
  current CPU frequency is 1.80 GHz.
  cpufreq stats: 480 MHz:14.94%, 720 MHz:0.24%, 816 MHz:0.00%, 888 MHz:0.00%, 1.08 GHz:0.00%, 1.32 GHz:0.00%, 1.49 GHz:34.37%, 1.80 GHz:50.45%  (317)
analyzing CPU 3:
  driver: cpufreq-sunxi
  CPUs which run at the same hardware frequency: 0 1 2 3
  CPUs which need to have their frequency coordinated by software: 0 1 2 3
  maximum transition latency: 2.00 ms.
  hardware limits: 480 MHz - 1.80 GHz
  available frequency steps: 480 MHz, 720 MHz, 816 MHz, 888 MHz, 1.08 GHz, 1.32 GHz, 1.49 GHz, 1.80 GHz
  available cpufreq governors: interactive, conservative, ondemand, userspace, powersave, performance
  current policy: frequency should be within 480 MHz and 1.80 GHz.
                  The governor "performance" may decide which speed to use
                  within this range.
  current CPU frequency is 1.80 GHz.
  cpufreq stats: 480 MHz:14.94%, 720 MHz:0.24%, 816 MHz:0.00%, 888 MHz:0.00%, 1.08 GHz:0.00%, 1.32 GHz:0.00%, 1.49 GHz:34.37%, 1.80 GHz:50.45%  (317)

 

Things you should consider in your analysis that could skew some results one way or the other:

 

* axp806 not detected by kernel

[    0.431724] [axp806] chip id not detect 0xff !
[    1.496996] input: axp80x-powerkey as /devices/platform/soc/pmu0/axp80x-powerkey/input/input0
[    4.925539] axp80x_aldo1: unsupportable voltage range: 858992688-3300000uV

* eth not functional

[    0.816207] sun50iw6p1-pinctrl pio: expect_func as:gmac0, but muxsel(5) is func:csi0
[    0.824909] sun50iw6p1-pinctrl pio: expect_func as:gmac0, but muxsel(5) is func:mdc0
[    0.833599] sun50iw6p1-pinctrl pio: expect_func as:gmac0, but muxsel(5) is func:mdio0
[    0.843450] gmac-power2: NULL
[   11.600421] libphy: gmac0: probed

And the usual:

 


[    0.574036] CPU budget cooling register Success
[   98.289887] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  100.337890] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  102.385838] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  104.433890] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  106.481898] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  108.529899] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  110.577898] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  117.745888] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  119.793889] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  121.841895] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  123.889885] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  125.937886] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  127.986068] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  130.034062] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  137.202051] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  139.250071] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  141.298079] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  144.370070] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  146.418081] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  148.466082] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  150.514083] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  152.562097] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  154.610085] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  156.658084] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  158.706065] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  160.753900] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  162.801901] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  164.849907] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  166.898099] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  168.946079] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  170.994065] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  173.042101] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  175.089900] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  177.137908] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  179.185893] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  181.233887] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  185.329886] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  188.401894] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  203.761825] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  205.809824] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  207.858053] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  239.601760] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  241.649752] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  243.697754] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  245.745753] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  247.793763] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  255.985793] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  258.033787] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  260.081787] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  262.129824] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  264.177789] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  266.225863] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  268.273858] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  331.761846] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  468.977880] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  472.049873] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  476.145868] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  478.193888] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  480.241872] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  483.313875] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  493.553898] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  495.601870] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  497.649899] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  499.697881] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  508.914069] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  510.962065] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  513.009903] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  515.058058] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  517.106066] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  519.154107] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  521.202067] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  523.250063] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  525.298065] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  528.370056] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  532.466050] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  538.610051] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  540.657890] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  543.729887] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  608.241767] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  612.337760] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  614.385771] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  616.433760] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  618.481789] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  620.529766] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  622.577768] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  633.841848] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  635.889852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  637.937852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  639.985880] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  642.033849] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  645.105825] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  647.153866] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  650.225825] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  652.273857] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  656.369827] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  660.465845] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  662.513851] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  665.585856] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  667.633853] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  669.681850] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  674.801853] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  677.873863] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  682.993825] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  687.089825] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  689.137854] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  691.185853] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  693.233856] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  696.305853] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  699.377849] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  702.449852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  706.545852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  709.617854] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  712.689851] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  715.761854] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  721.905853] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  727.025852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  730.097852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  732.145879] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  734.193850] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  738.289858] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  741.361855] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  744.433852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  748.529849] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  751.601860] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  755.697856] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  758.769845] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  760.817844] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  762.865868] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  766.961854] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  769.009846] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  771.057852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  774.129851] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  778.225880] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  781.297851] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  785.393850] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  792.561827] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  795.633852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  800.753827] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  802.801853] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  804.849846] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  806.897851] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  810.993850] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  814.065869] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  816.113852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  819.185825] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  822.257853] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  825.329848] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  827.377854] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  849.905852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000
[  861.169852] CPU Budget:update CPU0 cpufreq max to min 1488000~480000

 

I am pretty sure it runs at 1.8 GHz:

CPU1 freq      : 1800 MHz 
CPU2 freq      : 1800 MHz 
CPU3 freq      : 1800 MHz 
CPU4 freq      : 1800 MHz 
CPU count      : 4  
Governor       : performance  
SOC Temp       : 55 C 

Next time I'll get results from 3.10...


8 hours ago, tkaiser said:

Doesn't do the trick

Could you perform another trial, this time on the DRAM initialization code?

 

Comment out the entire body of `mctl_set_master_priority()` in u-boot/arch/arm/mach-sunxi/dram_sun50i_h6.c, or just comment out the call to that function.

 

This will disable the QoS of the memory controller.


ASM2115 SATA 6Gb/s bridge and Samsung 850 PRO 256GB


USB3 port on an OrangePi Lite2, 1.5 GHz, 4.17.y: http://ix.io/1jHP

iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2

                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    14794    15914    15837    15583    11989    15921
          102400      16    28337    29333    29288    30406    26548    29360
          102400     512    40902    41017    40852    41017    40007    41121
          102400    1024    41129    41316    41116    41317    40152    41333
          102400   16384    40305    41491    41212    41382    41319    41384

 


45 minutes ago, Igor said:

ASM2115 SATA 6Gb/s bridge and Samsung 850 PRO 256GB

 

That's only Hi-Speed, not SuperSpeed:

Bus 007 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 006 Device 004: ID 174c:1153 ASMedia Technology Inc. ASM2115 SATA 6Gb/s bridge
...
/:  Bus 07.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 5000M
/:  Bus 06.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 480M
    |__ Port 1: Dev 4, If 0, Class=Mass Storage, Driver=usb-storage, 480M

But nice to see that we get 41+ MB/s with the UAS Mass Storage protocol over "USB2" (correctly called Hi-Speed).

 

Edit: It's not even UAS that gets negotiated.


3 hours ago, tkaiser said:

It's not even UAS that gets negotiated.


Yes. It looks like a random issue. I did a few tests on the Pine H64 and two strange things happened. First, the ASM2115 works in fast mode but doesn't complete the test -- it hangs somewhere in the middle. The next test is with 152d:1562 JMicron Technology Corp. and a 128GB Transcend MTS M.2 SSD:

Pine H64, 4.18.y
 


                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4     6075     6117     9261     6474     6144     6127
          102400      16    24446    24446    24505    24573    24573    24499
          102400     512    70622    70779   254529   262009   261561    70847
          102400    1024    73886    73882   301171   311164   262617    74115
          102400   16384    46165    63574   334690   346206   301046    77123

 

Reference: Intel Xeon 2 GHz, 4.9.y, via a very good hub


                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    42354    43461    43895    44220    20904    43265
          102400      16   117924   147288   169746   170712    54493   114120
          102400     512   152368   152573   370516   386340   319417   153756
          102400    1024   151163   152634   380830   400042   328419   153147
          102400   16384   150013   151540   393300   413824   345363   152646

 

 


Edit: All images for H6 boards have been updated!

