
Da Xue

Posts posted by Da Xue

  1. On 2/23/2018 at 4:57 PM, HAm said:

    I am running Armbian 5.38 LePotato Debian Stretch (next kernel 4.14.14) and noticing some weird activity going on.

    When checking the syslog it was full of this stuff:

     

    Feb 23 06:25:29 localhost systemd[1]: Stopped Serial Getty on ttyS0.
    Feb 23 06:25:29 localhost systemd[1]: Started Serial Getty on ttyS0.
    Feb 23 06:25:39 localhost systemd[1]: serial-getty@ttyS0.service: Service hold-o
    ff time over, scheduling restart.
    Feb 23 06:25:39 localhost systemd[1]: Stopped Serial Getty on ttyS0.
    Feb 23 06:25:39 localhost systemd[1]: Started Serial Getty on ttyS0.
    Feb 23 06:25:50 localhost systemd[1]: serial-getty@ttyS0.service: Service hold-o
    ff time over, scheduling restart.
     

    and this goes on forever, filling the syslog with megabytes of text.

    What is going on, and how do I stop it?

    As far as I know there is no gadget attached to ttyS0; I only have a keyboard, a DVB-T USB stick, a CAT-5 Ethernet cable, and HDMI attached to the LePotato.

    The DVB-T stick is used as an ADS-B receiver with dump1090-mutability, and fr24feed is used to upload decoded aircraft positions to Flightradar24.com.

     

    Help?

    I haven't played with Armbian, but ttyS0 is only present on Amlogic's kernel; on the mainline kernel it should be ttyAML0. It could be that the device doesn't exist and the getty service keeps restarting because of it.
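
    If that is the case, a minimal way to confirm it and silence the restart loop (a sketch, assuming a stock systemd setup; the exact unit name may differ on your image):

    ls -l /dev/ttyS0 /dev/ttyAML0                     # check which UART device node actually exists
    sudo systemctl stop serial-getty@ttyS0.service    # stop the getty that keeps respawning
    sudo systemctl mask serial-getty@ttyS0.service    # keep it from coming back on the next boot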

  2. On 2/13/2018 at 4:45 AM, Tido said:

    I can confirm the green (link) LED is dead.  Ethernet 10/100M (ETH MDI diff x 2)

     

    Ethernet-Port on the schematic is on sheet 7,

    GPIOZ_14 (SPDIF_IN // ETH_LINK_LED)  -  P20 -
    GPIOZ_15 (PWM_C // ETH_ACT_LED)  -  P19 -

     

    Not really; I think they forgot to connect the green LED :rolleyes:. The part 5J1 is the single-port connector with magnetics module and LEDs, and refers to:

    HY911105H  https://aw-som.com/product_info.php?products_id=54

    HR911105A  http://www.datasheetmeta.com/search.php?q=HR911105A

     

    @Da Xue, can you check whether the green LED (pins 9+10) is only missing in the schematics, or did you forget to connect it?

     

     

    They weren't connected due to a placement issue we had. This will be addressed in the next manufacturing batch.

  3. @Tido, Armbian is way better at this than we are. Right now apt-get only updates the Debian packages, not the kernel, u-boot, or any other image fixes. If you are using standard Ubuntu, Armbian is the way to go.

     

    We have purchased a domain on which we will package some u-boots and kernels, but that won't happen until a non-preview release; hence our images are "preview" releases rather than formal releases. We anticipate the formal releases will happen around the time 18.04 comes out. Our images will have a different target than Armbian's, since we will be including specific support for ecosystem components like FPGAs and HATs, and for our target markets. Armbian will be the more generic distro for these boards.

  4. 25 minutes ago, TonyMac32 said:

     

    Thank you for the update. I saw the activity in the lists, and I should be able to try the patches out soon to see how they work. For the short term I've been using a wifi dongle.

    Awesome, thanks! I still haven't decided if they should do a gigabit Allwinner board called Helion. The other thing is an S905X IoT/SOM board with LPDDR3.

  5. From 91c030a615bc1bcc500cfd63d19ea5a61179f5e1 Mon Sep 17 00:00:00 2001
    From: Yizhou Jiang <yizhou.jiang@amlogic.com>
    Date: Tue, 27 Jun 2017 14:05:12 +0800
    Subject: [PATCH] PD#146205: eth: fix eth stop
    
    Change-Id: I7f8ad51dacd207a804377e71340fe15f547bbae0
    Signed-off-by: Yizhou Jiang <yizhou.jiang@amlogic.com>
    ---
     drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 9 +++++++++
     1 file changed, 9 insertions(+)
    
    diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
    index 0fe9ed86aa3f..4d58ed6d44fa 100644
    --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
    +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
    @@ -3390,6 +3390,15 @@ static void moniter_tx_handler(struct work_struct *work)
     		priv = netdev_priv(c_phy_dev->attached_dev);
     		if (priv) {
     			if (c_phy_dev->link) {
    +				if (priv->dev->stats.tx_packets > 100) {
    +					if (priv->dev->stats.rx_packets == 0) {
    +						pr_info("rx stop, recover eth\n");
    +						stmmac_release(priv->dev);
    +						stmmac_open(priv->dev);
    +					}
    +				}
    +				priv->dev->stats.tx_packets++;
    +				priv->dev->stats.tx_packets++;
     				if (priv->dirty_tx != priv->cur_tx &&
     						check_tx == 0) {
     					pr_info("tx queueing\n");
    -- 
    2.13.6

    This is a patch to the stmmac driver, as a workaround for the issue for now. Hopefully a better fix is coming. @Andro
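
    For anyone who wants to try it, the patch is in git mailbox format, so it can be applied to a kernel tree with git am (a sketch; "eth-fix.patch" is just a hypothetical filename for the saved patch):

    cd linux                 # your kernel source tree
    git am eth-fix.patch     # applies the workaround as a commit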

  6. So it would appear that the S905X is running somewhere around 1210MHz with the default bl30.bin, and around 1475MHz with the modified one (a rough cross-check against the sysbench times is sketched below). I think HardKernel has the source for the S905 trusted firmware and I do not. I'm not quite sure how the throttling logic works with 4 cores, or if there's a current throttler.

     

    @tkaiser do you have the single thread results?
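
    For what it's worth, the ~1210MHz estimate can be sanity-checked from the two 4-thread sysbench runs quoted further down, assuming sysbench CPU throughput scales roughly linearly with clock: the 1680MHz-setting run (roughly 1475MHz effective) completes in about 6.25s and the default-bl30 run in about 7.63s, so:

    # effective_default ≈ effective_modified * t_modified / t_default
    echo "scale=1; 1475 * 6.2510 / 7.6267" | bc    # ≈ 1208.9, i.e. about 1210MHz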

  7. Here's the bizarre part. When I run it with the default bl30.bin (supposed to be 1512MHz), I get the following:

    sysbench 0.4.12:  multi-threaded system evaluation benchmark
    
    Running the test with following options:
    Number of threads: 4
    
    Doing CPU performance benchmark
    
    Threads started!
    Done.
    
    Maximum prime number checked in CPU test: 20000
    
    
    Test execution summary:
        total time:                          7.6267s
        total number of events:              10000
        total time taken by event execution: 30.4950
        per-request statistics:
             min:                                  3.05ms
             avg:                                  3.05ms
             max:                                  3.15ms
             approx.  95 percentile:               3.05ms
    
    Threads fairness:
        events (avg/stddev):           2500.0000/0.00
        execution time (avg/stddev):   7.6237/0.00
    

    Do you have any results from other boards that I can compare against?
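
    For reference, the output above is from sysbench 0.4.12's CPU test; the usual invocation for these numbers (a sketch, with the thread count varied between runs) is:

    sysbench --test=cpu --cpu-max-prime=20000 --num-threads=4 run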

  8. The scaling_available_frequencies values are bogus and are hard-coded in the trusted firmware, so I have no visibility into the exact speed. It always reports 1512. I am not getting this node: /sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state. Do I have to enable a module? I am running numbers now for 1584/1056.
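
    That node comes from the cpufreq statistics framework (CONFIG_CPU_FREQ_STAT), which on recent kernels is a built-in config option rather than a loadable module. A quick check, sketched here assuming the kernel exposes /proc/config.gz:

    zcat /proc/config.gz | grep CPU_FREQ_STAT                        # is cpufreq stats enabled in this kernel?
    cat /sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state     # per-frequency residency, if enabled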

  9. No matter what values I put in the u-boot clock-speed tables that populate the PSCI, sysfs always reports 1512MHz. I think the A53 cores with crypto extensions perform slightly slower than their non-crypto counterparts in generic workloads.

     

    7 minutes ago, tkaiser said:

     

    And according to sysbench it's even slightly lower (more like 1470 MHz but that is pretty close to the reported 1512 MHz). In other words: too early to do any benchmarking now :) 

     

    Mind you, I am running this at 1680MHz and not 1512MHz for those numbers. The stock numbers are slower, but not proportionally so. The gains from anything over 1584MHz are very small, if not negative.

  10. Just now, tkaiser said:

     

    Thank you! Just to interpret the sysbench numbers: which distro do you use? I need the output from 'lsb_release -c' and 'file /usr/bin/sysbench'.

    ubuntu xenial

    /usr/bin/sysbench: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, BuildID[sha1]=01b3ec2b7f6a203ed
     

    This is just a bootstrapped Ubuntu image with the kernel from GitHub.

  11. I don't have armbianmonitor, but here are the results with the CPU set to 1680MHz and the DDR set to 2108MHz. The DDR can go past 2200MHz, but I haven't tested the performance because the preconfigured timings in u-boot seemed to negatively affect performance at higher speeds. I'll run the test at stock tomorrow or the day after.

    7-Zip 9.20  Copyright (c) 1999-2010 Igor Pavlov  2010-11-18
    p7zip Version 9.20 (locale=C,Utf16=off,HugeFiles=on,4 CPUs)
    
    RAM size:    1852 MB,  # CPU hardware threads:   4
    RAM usage:    850 MB,  # Benchmark threads:      4
    
    Dict        Compressing          |        Decompressing
          Speed Usage    R/U Rating  |    Speed Usage    R/U Rating
           KB/s     %   MIPS   MIPS  |     KB/s     %   MIPS   MIPS
    
    22:    2019   288    682   1964  |    53850   399   1218   4858
    23:    2016   289    709   2054  |    51161   385   1217   4681
    24:    2009   290    744   2160  |    52508   399   1219   4871
    25:    2002   290    788   2286  |    52036   400   1223   4893
    ----------------------------------------------------------------
    Avr:          289    731   2116               396   1219   4826
    Tot:          342    975   3471
    sysbench 0.4.12:  multi-threaded system evaluation benchmark
    
    Running the test with following options:
    Number of threads: 4
    
    Doing CPU performance benchmark
    
    Threads started!
    Done.
    
    Maximum prime number checked in CPU test: 20000
    
    
    Test execution summary:
        total time:                          6.2510s
        total number of events:              10000
        total time taken by event execution: 24.9940
        per-request statistics:
             min:                                  2.50ms
             avg:                                  2.50ms
             max:                                  2.61ms
             approx.  95 percentile:               2.50ms
    
    Threads fairness:
        events (avg/stddev):           2500.0000/0.71
        execution time (avg/stddev):   6.2485/0.00
    
    sysbench 0.4.12:  multi-threaded system evaluation benchmark
    
    Running the test with following options:
    Number of threads: 2
    
    Doing CPU performance benchmark
    
    Threads started!
    Done.
    
    Maximum prime number checked in CPU test: 20000
    
    
    Test execution summary:
        total time:                          12.5001s
        total number of events:              10000
        total time taken by event execution: 24.9964
        per-request statistics:
             min:                                  2.50ms
             avg:                                  2.50ms
             max:                                  2.60ms
             approx.  95 percentile:               2.50ms
    
    Threads fairness:
        events (avg/stddev):           5000.0000/1.00
        execution time (avg/stddev):   12.4982/0.00
    sysbench 0.4.12:  multi-threaded system evaluation benchmark
    
    Running the test with following options:
    Number of threads: 1
    
    Doing CPU performance benchmark
    
    Threads started!
    Done.
    
    Maximum prime number checked in CPU test: 20000
    
    
    Test execution summary:
        total time:                          25.0038s
        total number of events:              10000
        total time taken by event execution: 25.0010
        per-request statistics:
             min:                                  2.50ms
             avg:                                  2.50ms
             max:                                  2.56ms
             approx.  95 percentile:               2.50ms
    
    Threads fairness:
        events (avg/stddev):           10000.0000/0.00
        execution time (avg/stddev):   25.0010/0.00
    tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
    
    ==========================================================================
    == Memory bandwidth tests                                               ==
    ==                                                                      ==
    == Note 1: 1MB = 1000000 bytes                                          ==
    == Note 2: Results for 'copy' tests show how many bytes can be          ==
    ==         copied per second (adding together read and writen           ==
    ==         bytes would have provided twice higher numbers)              ==
    == Note 3: 2-pass copy means that we are using a small temporary buffer ==
    ==         to first fetch data into it, and only then write it to the   ==
    ==         destination (source -> L1 cache, L1 cache -> destination)    ==
    == Note 4: If sample standard deviation exceeds 0.1%, it is shown in    ==
    ==         brackets                                                     ==
    ==========================================================================
    
     C copy backwards                                     :   1941.6 MB/s (1.4%)
     C copy backwards (32 byte blocks)                    :   1944.8 MB/s (1.6%)
     C copy backwards (64 byte blocks)                    :   1915.5 MB/s (1.6%)
     C copy                                               :   1951.1 MB/s (1.5%)
     C copy prefetched (32 bytes step)                    :   1514.9 MB/s (0.3%)
     C copy prefetched (64 bytes step)                    :   1629.5 MB/s
     C 2-pass copy                                        :   1766.9 MB/s
     C 2-pass copy prefetched (32 bytes step)             :   1247.8 MB/s
     C 2-pass copy prefetched (64 bytes step)             :   1258.8 MB/s (0.2%)
     C fill                                               :   6068.0 MB/s
     C fill (shuffle within 16 byte blocks)               :   6068.4 MB/s
     C fill (shuffle within 32 byte blocks)               :   6068.3 MB/s
     C fill (shuffle within 64 byte blocks)               :   6068.5 MB/s
     ---
     standard memcpy                                      :   2026.4 MB/s (0.4%)
     standard memset                                      :   6072.0 MB/s
     ---
     NEON LDP/STP copy                                    :   2016.4 MB/s
     NEON LDP/STP copy pldl2strm (32 bytes step)          :   1365.3 MB/s (0.5%)
     NEON LDP/STP copy pldl2strm (64 bytes step)          :   1805.1 MB/s (0.3%)
     NEON LDP/STP copy pldl1keep (32 bytes step)          :   2388.2 MB/s
     NEON LDP/STP copy pldl1keep (64 bytes step)          :   2385.8 MB/s
     NEON LD1/ST1 copy                                    :   2003.2 MB/s (1.5%)
     NEON STP fill                                        :   6072.4 MB/s
     NEON STNP fill                                       :   6015.9 MB/s
     ARM LDP/STP copy                                     :   2020.0 MB/s (0.2%)
     ARM STP fill                                         :   6072.4 MB/s
     ARM STNP fill                                        :   6015.9 MB/s
    
    ==========================================================================
    == Memory latency test                                                  ==
    ==                                                                      ==
    == Average time is measured for random memory accesses in the buffers   ==
    == of different sizes. The larger is the buffer, the more significant   ==
    == are relative contributions of TLB, L1/L2 cache misses and SDRAM      ==
    == accesses. For extremely large buffer sizes we are expecting to see   ==
    == page table walk with several requests to SDRAM for almost every      ==
    == memory access (though 64MiB is not nearly large enough to experience ==
    == this effect to its fullest).                                         ==
    ==                                                                      ==
    == Note 1: All the numbers are representing extra time, which needs to  ==
    ==         be added to L1 cache latency. The cycle timings for L1 cache ==
    ==         latency can be usually found in the processor documentation. ==
    == Note 2: Dual random read means that we are simultaneously performing ==
    ==         two independent memory accesses at a time. In the case if    ==
    ==         the memory subsystem can't handle multiple outstanding       ==
    ==         requests, dual random read has the same timings as two       ==
    ==         single reads performed one after another.                    ==
    ==========================================================================
    
    block size : single random read / dual random read, [MADV_NOHUGEPAGE]
          1024 :    0.0 ns          /     0.0 ns 
          2048 :    0.0 ns          /     0.0 ns 
          4096 :    0.0 ns          /     0.0 ns 
          8192 :    0.0 ns          /     0.0 ns 
         16384 :    0.0 ns          /     0.0 ns 
         32768 :    0.0 ns          /     0.0 ns 
         65536 :    4.0 ns          /     7.2 ns 
        131072 :    6.1 ns          /    10.5 ns 
        262144 :    7.6 ns          /    12.5 ns 
        524288 :   69.2 ns          /   118.9 ns 
       1048576 :   95.9 ns          /   127.8 ns 
       2097152 :  108.5 ns          /   160.0 ns 
       4194304 :  143.3 ns          /   179.9 ns 
       8388608 :  164.1 ns          /   190.8 ns 
      16777216 :  173.4 ns          /   196.3 ns 
      33554432 :  182.1 ns          /   199.9 ns 
      67108864 :  190.7 ns          /   215.9 ns 
    
    block size : single random read / dual random read, [MADV_HUGEPAGE]
          1024 :    0.0 ns          /     0.0 ns 
          2048 :    0.0 ns          /     0.0 ns 
          4096 :    0.0 ns          /     0.0 ns 
          8192 :    0.0 ns          /     0.0 ns 
         16384 :    0.0 ns          /     0.0 ns 
         32768 :    0.0 ns          /     0.0 ns 
         65536 :    4.0 ns          /     7.0 ns 
        131072 :    6.1 ns          /    10.4 ns 
        262144 :    7.7 ns          /    12.7 ns 
        524288 :   68.6 ns          /   105.3 ns 
       1048576 :   95.9 ns          /   127.8 ns 
       2097152 :  108.0 ns          /   159.6 ns 
       4194304 :  111.1 ns          /   131.2 ns 
       8388608 :  113.7 ns          /   138.4 ns 
      16777216 :  115.1 ns          /   149.5 ns 
      33554432 :  115.4 ns          /   145.4 ns 
      67108864 :  115.7 ns          /   149.9 ns 

     

  12. @tkaiser The default bl30.bin permits the S905X to run at 1512MHz, vs. 1536MHz for the S905. I have a bl30.bin that can be configured up to 1680MHz, but the performance isn't linear due to throttling or some other logic in the hardware/firmware. With the hacked pre-1.0 SCPI that Amlogic implemented, I really don't know how to monitor clock speeds to ensure that it doesn't throttle down. I have no experience in reliably monitoring clock speeds in software on ARM, so if you know a way, please let me know and I will run the numbers for you.
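
    One generic way to estimate the real clock in software, independent of the bogus cpufreq sysfs values, is to count CPU cycles over a pinned single-threaded workload with perf and divide by the task-clock time it reports; a sketch, assuming perf is installed and the PMU cycle counter is exposed by this kernel:

    # pin a single-threaded load to one core and count its cycles;
    # effective clock ≈ reported cycles / reported task-clock time
    perf stat -e cycles,task-clock -- taskset -c 3 sysbench --test=cpu --cpu-max-prime=20000 --num-threads=1 run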
