
Problems with Gigabit

Hi,

 

We bought several Lime2 boards and tested them.

Unfortunately, the GBit LAN does not seem to work reliably. Ping tests revealed a packet loss of 1% to 20% on many boards (e.g. "ping -s 9000 -i 0.0001 -c 5000 fritz.box"). Some boards don't have any problems, others have more. Under heavy SSD access the loss increases a little, but it also happens without the SSD. Other boards worked fine with the same power supply and network cable.

 

Is it possible that this strange TX_DELAY isn't set correctly? Are there other known problems?

 

I used the wheezy image with kernel 3.4.105-lime2 and 3.4.107 from your latest wheezy image.

The image with kernel 4.0.4 stopped after "restarting sshd", so I couldn't test it.

 

Any thoughts? Can I adjust the TX-Delay timing without recompiling?

 

Thanks,

 

Jan

 

 

Any thoughts? Can I adjust the TX-Delay timing without recompiling?

 

You are probably referring to this:

 

 

Gbit ethernet requires a delay between the lines, which you can implement by adding 10cm or so extra trace on the PCB or by telling the MAC or PHY to do it in software. Very board specific, but so far I've only seen Intel reference designs use the extra trace length method; most arm/ppc/mips gbit boards use a PHY that handles it in hardware on request, like the GMAC is doing above.

 

http://lists.denx.de/pipermail/u-boot/2014-September/190239.html

 

This is done in u-boot, so you need to experiment with this parameter:

CONFIG_GMAC_TX_DELAY= (try numbers from 1 to 10)

 

File is here:

 

u-boot-sources/configs/A20-OLinuXino-Lime2_defconfig

 

I don't know of any better way than this:

 

1. compile - dd to SD card - reboot

2. test

3. change parameter and try again
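That loop can be partly scripted. As a rough sketch (the defconfig path is the one mentioned above; the helper name is made up for illustration), something like this could rewrite the TX delay value before each rebuild:

```shell
# set_tx_delay <defconfig> <value>
# Drop any existing CONFIG_GMAC_TX_DELAY line and append the value
# under test -- a sketch for the compile/dd/reboot loop above.
set_tx_delay() {
    grep -v '^CONFIG_GMAC_TX_DELAY=' "$1" > "$1.tmp" || true
    printf 'CONFIG_GMAC_TX_DELAY=%s\n' "$2" >> "$1.tmp"
    mv "$1.tmp" "$1"
}

# e.g.: set_tx_delay u-boot-sources/configs/A20-OLinuXino-Lime2_defconfig 4
# then recompile u-boot, dd it to the SD card, reboot and re-run the ping test.
```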


BTW: Since I want to play around with both TX and RX delay parameters, I started to modify Igor's build scripts for this purpose. The build system creates a .deb package for u-boot that overwrites the SPL/u-boot on the SD card, so my idea was to create all 64 variants of possible TX/RX delay values and then have a script automatically install one u-boot variant, reboot, test network performance using ping/iperf, and try the next u-boot .deb until all combinations are tested.

 
Currently I have the first step finished: automatically creating 8 different u-boot .deb packages by adjusting Igor's script.
 
1) Add GMAC_DELAY_TEST="yes" to compile.sh

2) Modify one line in lib/common.sh. Exchange

CHOOSEN_UBOOT="linux-u-boot-$VER-"$BOARD"_"$REVISION"_armhf"

with

CHOOSEN_UBOOT="linux-u-boot-${VER}-${BOARD}_${REVISION}_${TX}_${RX}_armhf"

3) Add the following lines in lib/main.sh after "grab_kernel_version":
# check whether we should just build a bunch of u-boot versions to
# brute-force all available GMAC TX/RX delay variations.
if [ "X${GMAC_DELAY_TEST}" == "Xyes" ]; then
        for TX in 0 1 2 3 4 5 6 7 ; do
                for RX in 0 ; do
                        CHOOSEN_UBOOT="linux-u-boot-${VER}-${BOARD}_${REVISION}_${TX}_${RX}_armhf"
                        UBOOT_PCK="linux-u-boot-$VER-"$BOARD
                        # search defconfig file for $BOARD
                        Defconfig="$(grep -i -- "-${BOARD}.dtb" ${DEST}/u-boot/configs/*_defconfig | cut -d: -f1)"
                        if [ ! -f "${Defconfig}" ]; then
                                case ${BOARD} in
                                        aw-som-a20)
                                                Defconfig="${DEST}/u-boot/configs/Awsom_defconfig"
                                                ;;
                                        cubieboard2)
                                                Defconfig="${DEST}/u-boot/configs/Cubieboard2_defconfig"
                                                ;;
                                        lime)
                                                Defconfig="${DEST}/u-boot/configs/A20-OLinuXino-Lime_defconfig"
                                                ;;
                                        micro)
                                                Defconfig="${DEST}/u-boot/configs/A20-OLinuXino_MICRO_defconfig"
                                                ;;
                                        bananapipro)
                                                Defconfig="${DEST}/u-boot/configs/Bananapro_defconfig"
                                                ;;
                                        udoo*)
                                                Defconfig="${DEST}/u-boot/configs/udoo_quad_defconfig"
                                                ;;
                                esac
                        fi

                        # patch defconfig with appropriate tx/rx values
                        MyTmpFile="$(mktemp /tmp/gmac_test.XXXXXX || exit 1)"
                        trap "cd /tmp; rm -f \"${MyTmpFile}\"; exit 0" 0 1 2 3 15
                        grep -v CONFIG_GMAC_TX_DELAY "${Defconfig}" | grep -v CONFIG_GMAC_RX_DELAY >"${MyTmpFile}"
                        cat "${MyTmpFile}" >"${Defconfig}"
                        echo -e "CONFIG_GMAC_TX_DELAY=${TX}\nCONFIG_GMAC_RX_DELAY=${RX}" >>"${Defconfig}"

                        compile_uboot
                done
        done
        exit 0
fi

Step 2) and 3) as patch: http://pastebin.com/RC5WptYB

 

After executing compile.sh you will end up with 8 different u-boot .debs in output/output/u-boot, each containing a different GMAC TX delay setting, which can be installed using "dpkg -i". Execution will stop afterwards.
 
The next step is to patch gmac.c so that different definitions of "CONFIG_GMAC_RX_DELAY" actually lead to different results.


Thank you so far, I'm trying.

My virtual Ubuntu is still downloading & compiling the unmodified sources ...


Well, I doubt that TX DELAY is related to the problems you experience (these delay settings should only matter when you compare different boards or board revisions).

 

But anyway. Now I have a patchset to create 64 u-boot variants with all possible TX/RX delay variations and will give them a try later (with the Lamobo R1 first; I will try the very same stuff with my Lime2 if I succeed with testing and this brute-force approach shows good results). I just added two more lines to gmac.c and hope that they will work:

diff --git a/board/sunxi/gmac.c b/board/sunxi/gmac.c
index 8849132..1bce3ce 100644
--- a/board/sunxi/gmac.c
+++ b/board/sunxi/gmac.c
@@ -26,6 +26,8 @@ int sunxi_gmac_initialize(bd_t *bis)
                CCM_GMAC_CTRL_GPIT_RGMII);
        setbits_le32(&ccm->gmac_clk_cfg,
                     CCM_GMAC_CTRL_TX_CLK_DELAY(CONFIG_GMAC_TX_DELAY));
+        setbits_le32(&ccm->gmac_clk_cfg,
+                     CCM_GMAC_CTRL_RX_CLK_DELAY(CONFIG_GMAC_RX_DELAY));
 #else
        setbits_le32(&ccm->gmac_clk_cfg, CCM_GMAC_CTRL_TX_CLK_SRC_MII |
                CCM_GMAC_CTRL_GPIT_MII);

The diff for modifications of Igor's scripts are here: http://pastebin.com/ZMF89Y57 and you still would need to define 'GMAC_DELAY_TEST="yes"' in compile.sh


So, step 1 finished:

 

Compiled, installed, and it boots on the first attempt, very nice script!

   Some questions appeared during compiling: Preclaim OSS device numbers (SOUND_OSS_CORE_PRECLAIM) ...

   And a final error, which can be ignored: /home/jan/lime2/lib/common.sh: line 385: *4096/1024: syntax error: operator expected (error-causing token is "*4096/1024").

 

Without any changes to the TX/RX delay:

 

root@lime2:~# ping -s 9000 -f fritz.box                                                                                                                               
PING fritz.box (192.168.178.1) 9000(9028) bytes of data.
..........................................................................................................................................................................................................................^C
--- fritz.box ping statistics ---
1213 packets transmitted, 988 received, 18% packet loss, time 6660ms
rtt min/avg/max/mdev = 2.883/3.338/12.678/0.810 ms, ipg/ewma 5.495/3.247 ms
 

Do you also see such error rates? Without measurement they are not really noticeable at first glance, thanks to the good TCP stack.

I can't believe that only our Lime2s are problematic: 12 of our 20 show dropped packets and 3 have major problems (which are also noticeable to the user), while some have only minor packet loss (between 0 and 1%). We checked various power supplies, different routers and cables. It only works if we put the Lime2 on a 100MBit switch.


Sorry, no time to test (been busy in the kitchen). But if you want to give different TX/RX delay parameters a try you could play with the 64 different u-boot packages my script created: http://kaiser-edv.de/tmp/lime2-u-boot.tgz

 

It contains 64 debs with the following name scheme: linux-u-boot-lime2_1.9_$TX_$RX_armhf.deb. To use an unmodified RX delay and e.g. 4 as TX delay, simply install

dpkg -i linux-u-boot-lime2_1.9_4_0_armhf.deb

(I completely rely on Igor's compile_uboot function and no testing has been done regarding RX delay! Be warned: you might end up with a corrupted bootloader!) JFR: another u-boot patch was necessary:

diff --git a/board/sunxi/Kconfig b/board/sunxi/Kconfig
index 2fcab60..4623de6 100644
--- a/board/sunxi/Kconfig
+++ b/board/sunxi/Kconfig
@@ -451,4 +451,10 @@ config GMAC_TX_DELAY
        ---help---
        Set the GMAC Transmit Clock Delay Chain value.

+config GMAC_RX_DELAY
+        int "GMAC Receive Clock Delay Chain"
+        default 0
+        ---help---
+        Set the GMAC Receive Clock Delay Chain value.
+
 endif

Both u-boot patches combined for GMAC RX DELAY: http://pastebin.com/adiWjzya

 
Regarding the problems you experience: Do the Lime2 boards showing problems have a different board revision? I cannot remember having packet loss with my Lime2. Unfortunately I cannot test immediately since I have to prioritise the Lamobo R1, which is dedicated to a customer and shows really bad performance right now. Maybe on Thursday I'll have a look.


Ok, I tried all 64 combinations using the 64 created u-boot Debian packages with different TX/RX delay settings. Results of a single iperf TX run to a directly connected MacBook Pro are here: http://pastebin.com/xqJN5Kpp (Igor's default settings with eth0 IRQs dedicated to cpu1 -- no further tuning applied).

 

What seems obvious: when TX delay is set to 0, 1 or 2, no network connection can be established at all. The measured results might depend on other factors like load and do not show any real difference between TX or RX delay settings (one probable exception: RX delay set to 5). While running the tests, iperf utilised a single CPU core at 100%, so different TX/RX delay settings might make a real difference that you cannot measure due to driver problems:

root@lamobo:~# time iperf -c 192.168.83.75 -t 30                                                                      
------------------------------------------------------------
Client connecting to 192.168.83.75, TCP port 5001
TCP window size: 43.8 KByte (default)
------------------------------------------------------------
[  3] local 192.168.83.82 port 45384 connected with 192.168.83.75 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec  1.18 GBytes   336 Mbits/sec

real    0m30.067s
user    0m0.150s
sys     0m28.850s

(nearly all time spent in sys). The overall CPU utilisation while running the test was around 130%-150%, so there's not that much room for improvement. Next I will try with an Olimex Lime2, and in the meantime do some iperf tests in the other direction with TX delay 4 and different RX delays.

 

BTW: clock speeds matter. I used the default settings (960 MHz maximum operating point in 4.0.4 by default) and 'verified' using

sysbench --test=cpu --cpu-max-prime=5000 run --num-threads=2

Since execution time was around 54.5 secs, the kernel might have clocked up to 960 MHz after a short period of time (on kernel 3.4 with 912 MHz and the performance CPU governor the very same test finishes in 55-56 secs).


Next observation: without adjusting the cpufreq settings, measuring anything performance-related is just crap. The default settings with 4.x scale the CPU frequency between 144 and 960 MHz with the ondemand governor. So if you start a test while the CPU has been idle for a while, you will get completely different results, since the CPU stays a few seconds in its lowest speed state before clocking up:

root@lamobo:/sys/devices/system/cpu/cpu0/cpufreq# cat stats/time_in_state 
144000 22814
312000 1807
528000 612
720000 784
864000 302
912000 76
960000 21377

Without something like 

echo performance >/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

applied prior to testing you can expect random results. So I put that into /etc/rc.local for the tests now (and they showed exactly what's to be expected: the first iperf run after the board has been idle is 100 Mbits/sec slower than subsequent tests, when the CPU has been clocked at 960 MHz).
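Just as a reading aid for dumps like the time_in_state listing above, here is a small helper (hypothetical, not part of the build scripts) that turns the raw jiffies into percentages per frequency:

```shell
# freq_share <file>
# Summarise a cpufreq stats/time_in_state dump as percentages per frequency.
freq_share() {
    awk '{ t[$1] = $2; sum += $2 }
         END { for (f in t) printf "%d MHz: %.1f%%\n", f/1000, 100*t[f]/sum }' "$@" | sort -n
}

# e.g.: freq_share /sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state
```

Applied to the dump above it shows the board spent roughly half its time at 144 MHz, which is exactly why the first benchmark run after an idle period looks so much worse.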


Testing with the Lamobo R1 is done, since it's useless to continue. I tried a few different combinations of RX delays (all with TX_DELAY 4):

RX 0: 458 Mbits/sec, 25% packet loss
RX 1: 458 Mbits/sec, 25% packet loss
RX 2: 456 Mbits/sec, 25% packet loss
RX 4: 457 Mbits/sec, 25% packet loss
RX 6: 455 Mbits/sec, 25% packet loss

The results are practically identical, and the obvious reason is that iperf is CPU-bound: one core always spent 100% utilisation on the iperf server thread in question. So while different RX delay settings might still make a difference, they won't show any practical difference while the CPU is the bottleneck (or the stuff I patched does not work :) ).

 

Time to test again with a Banana Pi or Lime2 which use a somewhat different driver framework.

 

I always experienced 25% packet loss when pinging the directly connected MacBook using

ping -s 9000 -i 0.0001 -c 5000 macbook.local

BTW: This is the test script I used. Set up the prerequisites as outlined in the comment, then call it from /etc/rc.local. It will exchange u-boot 64 times, reboot afterwards, and test with the newly applied settings:

root@lamobo:~# cat /usr/local/bin/gmac-delay-test.sh 
#!/bin/bash
#
# gmac-delay-test.sh
#
# to revert:
# rm /root/stop ; echo -n 0 >/root/rx ; echo -n 0 >/root/tx ; dpkg -i /root/uboot/linux-u-boot-lamobo-r1_3.1_0_0_armhf.deb ; shutdown -r now


if [ -f /root/stop ]; then
	exit 0
fi

Main() {
	read rx </root/rx
	read tx </root/tx
	
	echo "testing tx: ${tx}, rx: ${rx}"

	if [ $rx -eq 8 -a $tx -eq 0 ]; then
		touch /root/stop
		exit 0
	else
		DoTest
		if [ ${tx} -eq 7 ]; then
			tx=0
			rx=$(( ${rx} + 1 ))
		else
			tx=$(( ${tx} + 1 ))
		fi
		echo -n ${tx} >/root/tx
		echo -n ${rx} >/root/rx
		dpkg -i /root/uboot/linux-u-boot-lamobo-r1_3.1_${tx}_${rx}_armhf.deb
		sync
		shutdown -r now
	fi
} # Main

DoTest() {
	Testlog=/root/u-boot-test.txt
	ping -c 1 -W 1 MacBook.local >/dev/null 2>/dev/null
	case $? in
		0)
			MbitTX=$(iperf -c 192.168.83.75 -t 30 | awk -F" " '/Mbits/ {print $7}')
			echo -e "$(date)\t$tx\t$rx\t${MbitTX}" >>"${Testlog}"
			;;
		*)
			echo -e "$(date)\t$tx\t$rx\t-" >>"${Testlog}"
			;;
	esac
} # DoTest
	
Main


Hello tkaiser,

 

so, I used your script; it worked perfectly and I could test automatically :)

 

The command was: ping -q -i 0.001 -s 9000 -c 5000 fritz.box > r-$a.txt 2>&1

  

The result files are named r-TXDELAY-RXDELAY.txt:

r-0-0.txt:5000 packets transmitted, 3479 received, 30% packet loss, time 66099ms
r-0-1.txt:5000 packets transmitted, 580 received, 88% packet loss, time 645610ms
r-1-0.txt:5000 packets transmitted, 2892 received, 42% packet loss, time 148994ms
r-1-1.txt:5000 packets transmitted, 647 received, 87% packet loss, time 720073ms
r-1-2.txt:5000 packets transmitted, 0 received, 100% packet loss, time 52060ms
r-2-0.txt:5000 packets transmitted, 3540 received, 29% packet loss, time 193752ms
r-2-1.txt:5000 packets transmitted, 746 received, 85% packet loss, time 846836ms
r-3-0.txt:5000 packets transmitted, 3276 received, 34% packet loss, time 57450ms
r-3-1.txt:5000 packets transmitted, 673 received, 86% packet loss, time 560927ms
r-4-0.txt:5000 packets transmitted, 3601 received, 27% packet loss, time 117639ms
r-4-1.txt:5000 packets transmitted, 505 received, +57 errors, 89% packet loss, time 350492ms
r-4-2.txt:5000 packets transmitted, 0 received, 100% packet loss, time 52053ms
r-4-3.txt:5000 packets transmitted, 0 received, 100% packet loss, time 51942ms
r-5-1.txt:5000 packets transmitted, 0 received, 100% packet loss, time 52014ms
All other combinations didn't work at all (no network).

 

So 0-0 seems to fit, unfortunately.

 

Next step is to check the cpu-freq...


r-0-0.txt:5000 packets transmitted, 3479 received, 30% packet loss, time 66099ms

...

r-2-0.txt:5000 packets transmitted, 3540 received, 29% packet loss, time 193752ms

...

r-4-0.txt:5000 packets transmitted, 3601 received, 27% packet loss, time 117639ms

...

All other combinations didn't work at all (no network).

 

So 0-0 seems to fit, unfortunately.

 

Interesting. But TX4/RX0 seems to be even better?

 

I will test with my Lime2, maybe this evening or tomorrow. I just let another automated test run with the Lamobo R1 (using three 10-sec iperf runs and the performance CPU governor): http://pastebin.com/VzWxpepX

 

The results on the Lamobo R1 seem to be:

  • TX delays 0, 1 and 2 don't work at all.
  • TX throughput maxes out at 370 Mbits/sec (100% CPU utilisation @ 960 MHz)
  • RX throughput maxes out at 460 Mbits/sec (100% CPU utilisation @ 960 MHz)
  • Manipulating RX delay does matter regarding TX throughput
  • clock speed as well as cpufreq settings (especially governor + scaling_min_freq) directly influence performance/benchmarks

Really looking forward to testing with the Lime2 or Banana Pi. Maybe I'll increase the possible clock speed from 960 MHz to 1008 or even 1200 MHz in arch/arm/boot/dts/sun7i-a20.dtsi if the results of the GMAC+RTL8211 combination look promising, to get closer to the limits.


Hm, not really: the time is 117 s instead of 66 s, and it differs between runs.

 

It's independent of CPU frequency: the performance governor at 100 MHz, 500 MHz or 1000 MHz gives the same result.


I also did the tests in between. There are just 2 combinations that work somewhat reliably with my Lime2:

 

0/0: 600 Mbits/sec TX, 370 Mbits/sec RX, packet losses: 41% 27.0% (TX/RX)

3/0: 580 Mbits/sec TX, 355 Mbits/sec RX, packet losses: 39% 25.5% (TX/RX)

 

The results vary a lot, which looks like some sort of mismatch. Right now I'm testing a Banana Pi (the 'old' model, not the Pro/M1+); with its default delay settings (3/0, performance governor at 960 MHz and DRAM clocked at 480 MHz) it looks like:

 

TX: 700 Mbits/sec, 5000 packets transmitted, 3593 received, 28% packet loss, time 27440ms

rtt min/avg/max/mdev = 0.364/0.534/0.834/0.051 ms
 
RX: 800 Mbits/sec, 5000 packets transmitted, 5000 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.363/0.568/0.753/0.045 ms


That means that nearly all Lime2 boards have packet loss of up to 25%?!

Theoretically it should have a bad impact on performance (the TCP stack has to retransmit packets, and especially short messages get lost).

 

Is such a packet loss acceptable or not, if we want to sell such a product (and advertise GBit)?
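For a rough sense of what sustained loss means for a single TCP stream there is the well-known Mathis approximation: throughput is at most roughly MSS / (RTT * sqrt(loss)). The numbers below are illustrative only (standard 1460-byte MSS, the ~3 ms RTT from the ping output above), not measurements from these boards:

```shell
# tcp_ceiling_mbits <mss_bytes> <rtt_ms> <loss_fraction>
# Mathis et al. approximation for one TCP stream: MSS / (RTT * sqrt(p)),
# printed in Mbit/s. A back-of-envelope estimate, not a measurement.
tcp_ceiling_mbits() {
    awk -v mss="$1" -v rtt="$2" -v p="$3" \
        'BEGIN { printf "%.0f\n", (mss * 8) / ((rtt / 1000) * sqrt(p)) / 1e6 }'
}

tcp_ceiling_mbits 1460 3 0.01   # ~39 Mbit/s: 1% loss already rules out GBit-class throughput
tcp_ceiling_mbits 1460 3 0.10   # ~12 Mbit/s at 10% loss
```

So even a few percent of packet loss makes advertising GBit questionable for a single TCP connection, regardless of what bulk transfers with many parallel streams achieve.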


That means that nearly all Lime2 boards have packet loss of up to 25%?!

 

I don't know how to interpret that correctly since your ping test is some sort of flooding, isn't it?

 

Regarding performance: as already mentioned yesterday evening, I also let my Banana Pi run through the tests. The best settings for TX/RX delay were 3/0 (the defaults):

TX 3, RX 0:

TX: 700 Mbits/sec, 5000 packets transmitted, 3593 received, 28% packet loss, time 27440ms
rtt min/avg/max/mdev = 0.364/0.534/0.834/0.051 ms
RX: 800 Mbits/sec, 5000 packets transmitted, 5000 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.363/0.568/0.753/0.045 ms

TX 3, RX 2:

TX: 650 Mbits/sec, 5000 packets transmitted, 3583 received, 28% packet loss, time 27449ms
rtt min/avg/max/mdev = 0.369/0.520/0.972/0.062 ms
RX: 700 Mbits/sec, 5000 packets transmitted, 4999 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.362/0.542/0.687/0.067 ms

TX 1, RX 0:

TX: 680 Mbits/sec, 5000 packets transmitted, 3596 received, 28% packet loss, time 27386ms
rtt min/avg/max/mdev = 0.374/0.530/0.892/0.058 ms
RX: 830 Mbits/sec, 5000 packets transmitted, 5000 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.362/0.557/1.713/0.057 ms

TX 1, RX 4:

TX: 650 Mbits/sec, 5000 packets transmitted, 0 received, 100% packet loss, time 50824ms
RX: 30.8 Kbits/sec, 5000 packets transmitted, 0 packets received, 100.0% packet loss

TX 2, RX 2:

TX: 640 Mbits/sec, 5000 packets transmitted, 3592 received, 28% packet loss, time 27469ms
rtt min/avg/max/mdev = 0.366/0.536/0.878/0.059 ms
RX: 800 Mbits/sec, 5000 packets transmitted, 4999 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.364/0.569/0.710/0.037 ms

The RX ping reports are from OS X 10.9.5 on a MacBook Pro. In the TX direction there are more lost packets but also more throughput compared to the Lime2.


Ping calls it "flooding", but only because that was defined around 1970.

I send 5000 packets, one every 1 ms; at 9000 bytes per packet this makes 9 MByte per second.
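The offered load of that test is easy to double-check (pure arithmetic, nothing board-specific):

```shell
# One 9000-byte packet per millisecond = the load of the ping test above.
awk 'BEGIN {
    bytes_per_s = 9000 * 1000
    printf "%.0f MByte/s = %.0f Mbit/s\n", bytes_per_s / 1e6, bytes_per_s * 8 / 1e6
}'
# prints: 9 MByte/s = 72 Mbit/s -- far below what GBit Ethernet should sustain
```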


Arrrrgh, now my other test Lime reports nearly 0%:

 

root@demo13:~# ping -f -s 9000 -i 0.001 fritz.box
PING fritz.box (192.168.178.1) 9000(9028) bytes of data.
........^C
--- fritz.box ping statistics ---
5356 packets transmitted, 5348 received, 0% packet loss, time 21065ms
rtt min/avg/max/mdev = 2.892/3.234/10.723/0.314 ms, ipg/ewma 3.933/3.318 ms
 

 

What happened? I unplugged the battery.

A simple reboot doesn't seem to help.


Is there a possibility to reset the PHY chip?

Maybe with the GPIO pin?


What happened? I unplugged the battery.

A simple reboot doesn't seem to help.

 

Interesting. A cold/warm/hot boot issue (there seem to be a couple of related issues, e.g. memory calibration: http://linux-sunxi.org/A10_DRAM_Controller_Calibration)

 

Maybe the whole problem is related to DRAM (initialisation). IIRC Igor already included the a10-meminfo tool, so it would be worth comparing the dqs gating delay settings in both cases (please compare with http://irclog.whitequark.org/linux-sunxi/2015-04-27)


Hi Dario,

 

unfortunately, only a workaround: if you limit the connection to 100MBit, it works very reliably without packet loss (something like "ethtool -s eth0 speed 100 duplex full").

With some bad boards you then get a better connection than with 1G.

 

So approx. 25% of the boards have packet loss of up to 5% (good for a Lime2), 50% up to 10% (you can work with it) and the remaining boards are above 10% (100MBit is often better).

 

The latest kernel doesn't help. According to tkaiser, other boards seem to show the same behaviour (Lamobo R1 and Banana Pi), so it's probably an A20 problem. Changing the RX/TX parameters or the CPU frequency does not help either.

 

In the long run we are considering other boards with other chips. We also have problems with the watchdog, which isn't working reliably, probably because the power chip switches to a "safe state" in some cases.


I am using my board with the speed limited in software via mii-tool. Unfortunately, 100 Mb/s is too slow for my use case. Anyway, thank you for your answer.


Probably. With bad boards, I noticed that the transfer rate for mass transfers is nearly the top rate (25 MBytes/s), even with such packet loss, probably because the TCP stack notices the missing packets from the ACKs of later packets. With slow transfers (only a few packets per second, like typing into a command shell), the responsiveness is bad: for 10-25% of keystrokes the response comes a second later, which is annoying for me. I'm also not sure which speed is better...


Ping calls it "flooding", but only because that was defined around 1970.

I send 5000 packets, one every 1 ms; at 9000 bytes per packet this makes 9 MByte per second.

 

If I flood my PowerMac G5 over localhost, I get 35% packet loss. Same thing if I flood my G4 server: 35% packet loss.  :wacko:

So compared to that, your ARM boards are doing absolutely GREAT, even though your results might be unacceptable.  :D


As per this thread: https://github.com/OLIMEX/OLINUXINO/issues/31 it seems a fix was found: https://www.mail-archive.com/u-boot@lists.denx.de/msg206626.html , by setting CONFIG_GMAC_TX_DELAY=4. This might work for the A20 Lime2. Any chance to get an image with this setting?

 

BTW: You should start reading both the threads you're citing and the ones you post into. This thread here is 10 months old and was only about correct TX/RX settings in the beginning -- see above. And the one you were referring to ended up rather quickly this way: https://www.mail-archive.com/u-boot@lists.denx.de/msg206767.html

 

Now it seems it was 'just' an autonegotiation problem with the RTL8211CL used on Olimex boards (nearly every other vendor using the A20 uses the RTL8211D/E instead).

This topic is now closed to further replies.