
Posted

Hi everyone, 

 

especially armbian maintainers!

 

I'd like to make use of this board's great SATA performance, which is crippled by the 1Gbps network link, so my question is: would etherchannel work on this hardware? I understand that the board has only 1Gbps from the switch (dual LAN ports), so I guess the only possible way to exceed 1Gbps output would be to bundle WAN && [LAN0|LAN1]. Has anyone tried this, or does anyone believe it should work? I don't have the board to try.

 

Thanks.

Posted

Here is the configuration.

 

Cisco switch side:

 

interface Port-channel2
 description LACP Channel for mk2
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1,2
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet1/0/23
 description mk2 eth0
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 2 mode active
!
interface GigabitEthernet1/0/24
 description mk2 eth1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 2 mode active
!

 

armbian module config:

# /etc/modules: kernel modules to load at boot time.
 bonding mode=4 miimon=100 lacp_rate=1
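
(For reference, the same parameters can also go into /etc/modprobe.d, which is the more usual place for module options on Debian-based images; a minimal sketch, assuming nothing else already sets them:)

# /etc/modprobe.d/bonding.conf
options bonding mode=4 miimon=100 lacp_rate=1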

 

armbian networking:

#/etc/network/interfaces 
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
 
auto eth0
iface eth0 inet manual
    bond-master bond0
 
auto eth1
iface eth1 inet manual
    bond-master bond0
 
auto bond0
iface bond0 inet static
    address 10.0.0.80
    gateway 10.0.0.1
    netmask 255.255.255.0
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate 1
    bond-slaves none
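
Once both sides are configured, the negotiated LACP state can be checked on the board; a quick sanity check might be:

cat /proc/net/bonding/bond0
# should report "Bonding Mode: IEEE 802.3ad Dynamic link aggregation",
# list both slaves, and show a matching aggregator ID for each slave
# if negotiation with the switch succeeded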

Thanks.

Posted

It was my understanding that the whole Topaz switch has only the 1 Gbps connection to the SOC, and that all three ethernet ports were from that switch.  The labels for wan, lan0, and lan1 are just labels, on switch ports 1, 2, and 3.

Posted
  On 2/7/2019 at 2:20 PM, ManoftheSea said:

It was my understanding that the whole Topaz switch has only the 1 Gbps connection to the SOC, and that all three ethernet ports were from that switch.  The labels for wan, lan0, and lan1 are just labels, on switch ports 1, 2, and 3.


 

Unfortunately you are right: Topaz is connected to the SoC via RGMII (1Gbps), so there is no chance for a port channel there. 

Posted

How about with a mPCIE network card?

 

Currently there is Syba Gigabit Ethernet Mini PCI Express card on Amazon for about $17. That particular card only gets you one additional port, but I have seen options with two ports (although I haven't found any 2-port models anywhere near this price point).

 

Would it be possible to use a gigabit port on the PCI expansion slot in an etherchannel with one of the ports of the built-in switch to achieve a 2gbps link? I haven't figured out how to configure those built-in switch ports to anything other than the 3-port bridge yet, but I wonder how limited our options are with that Topaz switch in the middle.

 

If the 2-port gigabit expansion card wasn't so cost-prohibitive, I think the mPCIE slot would have the bandwidth to do 2gbps by itself. Judging by tkaiser's benchmarks of the mPCI SATA expansion board, it looks like he's hitting between 250,000 and nearly 300,000 kiloBytes/sec when using a single drive on the expansion board. Of course that's dependent on the drive and several other factors, but that gives us an upper limit of at least 1.9 to 2.3 gigabits/sec.
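
(Rough math, assuming the benchmark reports KiB/s: 250,000 × 1024 × 8 ≈ 2.0 × 10^9 bit/s and 300,000 × 1024 × 8 ≈ 2.5 × 10^9 bit/s, i.e. roughly 1.9 to 2.3 Gibit/s.)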

 

And of course this all assumes you're not already using the mPCIE slot for more SATA ports. :-\

Posted
  On 2/14/2019 at 10:23 PM, Jbobspants said:

How about with a mPCIE network card?

 

Currently there is Syba Gigabit Ethernet Mini PCI Express card on Amazon for about $17. That particular card only gets you one additional port, but I have seen options with two ports (although I haven't found any 2-port models anywhere near this price point).

 

Would it be possible to use a gigabit port on the PCI expansion slot in an etherchannel with one of the ports of the built-in switch to achieve a 2gbps link? I haven't figured out how to configure those built-in switch ports to anything other than the 3-port bridge yet, but I wonder how limited our options are with that Topaz switch in the middle.

 

If the 2-port gigabit expansion card wasn't so cost-prohibitive, I think the mPCIE slot would have the bandwidth to do 2gbps by itself. Judging by tkaiser's benchmarks of the mPCI SATA expansion board, it looks like he's hitting between 250,000 and nearly 300,000 kiloBytes/sec when using a single drive on the expansion board. Of course that's dependent on the drive and several other factors, but that gives us an upper limit of at least 1.9 to 2.3 gigabits/sec.

 

And of course this all assumes you're not already using the mPCIE slot for more SATA ports. :-\


 

Not a bad idea :) Looking at http://espressobin.net/wp-content/uploads/2017/01/ESPRESSObin-V3-Hardware-Block-diagram-v3-1.pdf , one could imagine we might get much higher than 2Gbps if we go the 10Gbps PCIe adapter route.. SATA has its own direct lane to the SoC..

Posted
  On 2/15/2019 at 10:33 AM, Spemerchina said:

 

not a bad idea :) By looking at http://espressobin.net/wp-content/uploads/2017/01/ESPRESSObin-V3-Hardware-Block-diagram-v3-1.pdf , one could imagine that we might get much higher than 2Gbps if we go 10Gbps PCI-E adapter route.. SATA has it's own direct lane to SoC..


 

True, the onboard SATA has its own direct path, and that's what some of tkaiser's benchmarks were testing: the onboard SATA port vs. ports on his mPCIe SATA expansion board. It seems the onboard port is significantly faster, while the tests on the mPCIe SATA ports were limited to roughly 2Gbps. I'm not sure if the bottleneck was the cheap 88SE9215 SATA card or the mPCIe path. I looked for specs on the mPCIe port, but was not able to determine how many PCIe lanes or what PCIe revision it uses (this might be common knowledge to the community, but I wasn't able to dig it up in a quick search).

 

A 10Gbps card would be very cool, but in the same test he came to the conclusion that overall speed when using both onboard and mPCIe SATA simultaneously (in RAID 0/striping mode) was limited by the CPU. I know SATA with software RAID and network benchmarks are very different, but in a real-world scenario I doubt we'd be able to get anywhere near 10Gbps.

Posted
  On 2/18/2019 at 8:50 PM, Jbobspants said:

 

True, the onboard SATA has it's own direct path, and that's what some of tkaiser's benchmarks were testing...the onboard SATA port vs ports on his mPCIE SATA expansion board. It seems the onboard port is significantly faster, while the tests on the mPCIE SATA ports were limited to roughly 2Gbps. I'm not sure if the bottleneck was the cheap 88SE9215 SATA card, or the mPCIE path. I looked for specs on the mPCIE port, but was not able to determine how many PCIE lanes or what PCIE revision it uses (this might be common knowledge to the community, but I wasn't able to dig it up in a quick search).

 

A 10Gbps card would be very cool, but in the same test, he came to the conclusion that overall speed using both onboard and mPCIE SATA simultaneously (in RAID 0/striping mode) was limited by the CPU. I know SATA w/software RAID and network benchmarks are very different, but in a real-world scenario, I doubt we'd be able to get anywhere near 10Gbps.



 

It looks like we have PCIe 2.0, which supports 500/500MB/s simultaneous read/write bandwidth. There are some cheap Asus 10GbE adapters that could possibly use all the bandwidth available (read from SATA, write to PCIe and in the opposite direction), so we could see close to 400/400 out of a 10GbE port without RAID overhead. I'm not interested in RAID, just JBOD performance :)

 

Edit: Not sure about the exact wiring in the Espressobin and the capabilities of the SERDES lanes shared between PCIe 2.0 and SATA; some research suggests around 7.5Gbps total throughput, so I'll lower expectations to 3Gbps full-duplex with a separate 10GbE card in the mPCIe slot..

 

I believe @tkaiser could say a word or two on the matter :)

 

 

 

Posted
  On 2/19/2019 at 6:48 AM, Spemerchina said:

Edit: Not sure about the exact wiring in Espressobin and capabilities of shared SERDES lanes among PCIe2.0 and SATA, some research says around 7.5Gbps total throughput, so I'll lower expectations to 3Gbe full duplex speeds with separte 10Gbe card in mPCI slot..

 

I believe @tkaiser could say a word or two on the matter :)

 

 

 


You have to make a choice with the shared SERDES lanes.

Espressobin has already made that choice for you with its PCB layout: one lane goes to SATA, one to USB3 and one to PCIe.

There is only one lane on the PCIe slot, which is also apparently limited to 2.5Gb/s on early boards due to issues running at PCIe Gen 2 speeds.

The switch is connected to the Armada 3720 via a 1Gb link.

 

Your physical limitations will be 2.5* or 5Gb on the PCIe slot and 3Gb on the SATA 2 connection. The 3Gb link uses 8b/10b encoding, so it's limited to 300MB/s.
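
(As a worked example of that encoding overhead: 3.0 Gbit/s line rate × 8/10 = 2.4 Gbit/s of payload ≈ 300 MB/s. PCIe 2.0 uses the same 8b/10b encoding, which is why a 5 GT/s lane also tops out around 500 MB/s per direction.)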

 

 

* OpenWrt has reduced the PCIe speed to Gen 1.0 

https://git.openwrt.org/?p=openwrt/openwrt.git;a=commitdiff;h=772258044b48036699302840abf96cd34c4e5078

 

Posted

Good info, guys!

 

@Spemerchina any chance you could post a link to that 10gb Asus card you mentioned?

 

My single-port network card arrived from Amazon today, along with a half-size to full-size mPCIE bracket. Everything went together easily and was recognized by Armbian right away. As soon as I booted there was a new "enp0s0" device in "ip link", and with a little fiddling in /etc/systemd/network/ I was able to get it to come up at boot and pull an IP automatically.
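
For anyone repeating this, a minimal sketch of the kind of file that did the trick (file name and contents here are illustrative, not my exact config):

# /etc/systemd/network/20-enp0s0.network
[Match]
Name=enp0s0

[Network]
DHCP=ipv4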

  Reveal hidden contents

Not much time for testing today, unfortunately, and I'll need to bring home a managed switch from work in order to actually test etherchannel performance.

 

I also ordered this 2-port card from Ebay. It's coming from China so it will be 3 weeks, minimum, but at under $40 I figure it's worth a chance. Of course there are no details on the chip or anything, so there's no telling how much fighting I'll have getting the drivers to work.

Posted

So I haven't been able to successfully get two ports into a bond yet; it seems to be an issue with the Topaz switch. I'm not sure if a different configuration is needed because of the way those three built-in interfaces are somehow sub-interfaces of eth0; I've never worked with anything in this configuration before. Basically, the single port on the mPCIe card joins the bond and I get an IP address from DHCP on the bond interface, but I can't get any other interfaces to join the bond. I even tried creating a bond with lan0 and lan1 together (thinking those ports are more similar and might work better for some reason), but neither of them comes up, and neither joins the bond. Does anyone have a suggestion, considering the way this switch is managed?

 

I also tried a USB 3 gigabit adapter as a second port for testing, but I'm having weird driver issues. I'm using an ASIX AX88179 adapter, which is listed as supported, but after a reboot it never comes up (no link light, although it works fine on my Fedora laptop). It seems to be recognized by the kernel: dmesg shows the device and an interface gets created, but it just won't come online or show that it's connected. I tried unplugging/replugging the adapter a few times; once it actually came up on USB2 and worked fine (presumably at USB2 speeds), but I haven't been able to replicate that. Dmesg output from that time is below.

 

 

  Reveal hidden contents

 

Of particular note: the adapter is recognized at boot (2.192970), I unplug it at 226.216153 and plug it back in at 234.823791, there's an issue with the device configuration and it apparently power-cycles the USB3 port, then recognizes it as USB2:

 

[ 235.071789] usb 3-1: new SuperSpeed Gen 1 USB device number 4 using xhci-hcd
[ 235.095080] usb 3-1: No LPM exit latency info found, disabling LPM.
[ 235.095140] usb 3-1: no configurations
[ 235.095169] usb 3-1: can't read configurations, error -22
[ 235.123870] usb usb3-port1: attempt power cycle
[ 236.327372] usb 2-1: new high-speed USB device number 4 using xhci-hcd

 

I'm not sure how normal that sequence is; I haven't used USB much on these SBCs, and I'm not really sure where to look next to try to get the USB3 adapter working at gigabit speeds.
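
One thing I plan to try to narrow it down (generic USB debugging, nothing board-specific): check what speed and driver the adapter actually enumerated with, e.g.:

lsusb -t                      # shows the negotiated speed (480M vs 5000M) and the bound driver
dmesg | grep -i ax88179       # whether the ax88179_178a driver claimed the device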

 

Anyway, now I'm kinda stuck on etherchannel testing until one of the following happens:

  1. I figure out how to configure one of the Topaz switch ports in a bonded interface
  2. I figure out the USB3 driver issues on my USB adapter
  3. My 2-port mPCIe card arrives in a few weeks (and I can sort out any driver issues with it)

I'm definitely open to suggestions, and I will post any breakthroughs I might have.

Posted
  On 2/26/2019 at 9:53 PM, Jbobspants said:

So I haven't been able to successfully get two ports into a bond yet


Why are you trying to bond two ports?

There is only a 1Gb link between the switch and the SoC; you're not going to get any more bandwidth.

Posted

First of all, I admire everyone's quest for speed. Finding every bit of performance available in these boards is a lot of fun... But allow me to give a brief sermon on link aggregation, to manage expectations for those who may not have much experience with it.

 

So I've gone down the NIC bonding rabbit hole many times on many pieces of equipment. I even have LACP trunks going to my garage on principle. It's really, really hard to get a performance payoff on bonded gig links. Single TCP streams are still only sent down one link at a time, so things like iperf testing and most basic file I/O tests won't even send traffic on more than one link. The smarter hashing algorithms will load-balance the links to a degree, typically based on MAC or IP:port per service; i.e., servers with dozens and dozens of clients work well because the bandwidth across the two links can be distributed. Even protocols like NFS 4.1 and SMB 3.0 that have concepts of parallelism still don't perform well with just two endpoints. Typically it's been backup servers that are being absolutely pounded by many nodes at night, or VM hosts, where I've ever gotten over 1 gig of traffic on LACP.
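
If you do bond anyway, the transmit hash policy is worth setting so multiple flows at least have a chance of spreading across both links. A sketch with iproute2 (option names are the standard bonding driver ones; adjust to taste):

# layer3+4 hashes on IP addresses + ports, so different flows can land on different slaves
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4 miimon 100 lacp_rate fast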

 

The best performance I've been able to get using multiple gig links is by properly implementing iSCSI with several initiators on a node and multipathing. Since iSCSI multipathing works on SCSI I/O calls rather than TCP, it's able to deal with the out-of-order stuff and you can really hammer your links.

 

Your best performance bet is probably 10G, and maybe you'll hit a few gigabits on that. IMHO I'd just focus on shaving latency off your services and keep your networking stack simple. If performance is really an issue, I'd probably just brute-force it with a junky Intel box, or buy a more purpose-built board.

 

PS: in defense of the 1 gig SGMII lane used for the Topaz switch to the CPU... that's still 1 gig of full-duplex traffic, i.e. 1 gig of potential packet forwarding/filtering between WAN and LAN on the Topaz. In theory it's more potent than my EdgeRouter Lite... I'll have to test one day.

 

 

Posted (edited)
  On 2/27/2019 at 1:26 AM, chrisf said:

Why are you trying to bond two ports?

There is only a 1Gb link between the switch and the SoC, you're not going to get any more bandwidth


Yes, like @ManoftheSea pointed out above, we are aware that the three onboard ports have a 1gig bottleneck at the SoC. However, with a 1gbps mPCIe card in addition to one of the built-in ports, there is a theoretical 2gbps path. I was only testing the lan0 + lan1 etherchannel group for the sole purpose of trying to get any of the built-in ports to show up in an etherchannel/bond interface (unsuccessfully, so far).

 

@lanefu, I appreciate your experience and insight, and I'd say you're right on with your assessment of "typical" traffic going to a home fileserver. You make a good point to level-set for anyone coming across this thread, and this most certainly wouldn't be a great option for an HTPC or home media server. From my perspective, however, the Espressobin has a [theoretical] 6gbps path to SATA, and it's a shame the network bottlenecks at 1gbps. In my hypothetical use case, several hosts would be accessing data on this server simultaneously. Each of those hosts would have a single 1gig NIC, so of course there would be no point in going above that speed for any single connection. However, as more and more hosts access the share at the same time, a handful of 1gig clients could easily exceed a single interface, so an etherchannel would make sense.

 

Also, my experience comes from a work environment where a single link is a potential single point of failure. We don't install anything without redundant links, and an etherchannel is a great way to allow automated failover in the event of one link failing, without having to run additional heartbeat software. I realize that's probably not something your typical home user is concerned with, but IMO it would be cool to have.

 

One final note along the lines of @lanefu's post: I should mention that in all my tests so far with dual 1gig links to the Espressobin, I am hitting a CPU bottleneck before ever getting close to 2gbps network speed. Obviously I don't have an etherchannel working yet (and I really don't know how that will affect CPU utilization for network throughput). But with one of the built-in ports on VLAN A and the mPCIe port on VLAN B, using four other test boxes (2 on VLAN B, 2 on VLAN A) doing reads, writes, or simultaneous reads and writes, I have been unable to achieve much over 100MB/sec total before both Espressobin cores peg at 100%. I've done a few tests with NFS exports and a few with NBD exports, but all my tests so far have been limited by the CPU on the server. I'll continue to test and tweak my setup, but at this point I'm not sure this is the right platform for a high-speed NAS.

 

All that said, I'm still trying to figure out how to add one of the built-in ports to a bond interface... Any suggestions would be greatly appreciated! :D

 

Edit: Just to clarify, 1gbps (gigabits per second) is about 125MB/sec, minus overhead. 2gbps would be about 250MB/sec, give or take.

 

Edited by Jbobspants
Additional clarification
Posted

I feel like you would already know this, but are you removing the ports from the bridge before you try to bond them?

 

Can you share your config files for how you're trying to bond?  With systemd-networkd?

Posted
  On 2/21/2019 at 3:57 AM, Jbobspants said:

Good info, guys!

 

@Spemerchina any chance you could post a link to that 10gb Asus card you mentioned?

 

My single-port network card arrived from Amazon today, along with a half-size to full-size mPCIE bracket. Everything went together easily and was recognized by Armbian right away. As soon as I booted there was a new "enp0s0" device in "ip link", and with a little fiddling in /etc/systemd/network/ I was able to get it to come up at boot and pull an IP automatically.

  Reveal hidden contents

Not much time for testing today, unfortunately, and I'll need to bring home a managed switch from work in order to actually test etherchannel performance.

 

I also ordered this 2-port card from Ebay. It's coming from China so it will be 3 weeks, minimum, but at under $40 I figure it's worth a chance. Of course there are no details on the chip or anything, so there's no telling how much fighting I'll have getting the drivers to work.


Hi, just google ASUS XG-C100C.

Posted
  On 2/27/2019 at 4:50 AM, Spemerchina said:

Hi, just google ASUS XG-C100C .


Ah, OK, that's a PCIe x4 card; I don't think we'll have much luck with it in the Espressobin's mPCIe x1 slot. Might be fun to get a third 10gig port on a Macchiatobin, though ;)

Posted
  On 2/27/2019 at 4:38 AM, ManoftheSea said:

I feel like you would already know this, but, are you removing the ports from the bridge before you try to bond them?

 

Can you share your config files for how you're trying to bond?  With systemd-networkd?


 

Thanks for the nudge... from your post I assume it should just work like any other port. And yes, I had removed the port from the bridge before trying to add it to the bond. :)

 

OK, so I went ahead and did a fresh install of the latest official download, 4.19.20-mvebu64. I reconfigured from scratch, and sure enough, I was able to get the "wan" port and my mPCIe "enp0s0" to both join the bonded interface. Things don't work exactly as I expect, though. From a fresh reboot, for whatever reason the wan port doesn't automatically join the bond, but the enp0s0 port does come up. If I do "systemctl restart systemd-networkd", it comes back up with both ports in the bond. I've also tried a shut/no shut on the switch, but it only comes back up with enp0s0; wan is still not in the bond until I restart systemd-networkd.

 

Full config from /etc/systemd/network, plus output of "ip addr" and "cat /proc/net/bonding/bond1" (before and after restarting systemd-networkd), are below.

 

  Reveal hidden contents
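
In rough outline, the setup looks like this (an illustrative sketch of the general shape only; the exact files and command output are in the spoiler above, and the file names match the directory listing further down):

# 20-bond1.netdev
[NetDev]
Name=bond1
Kind=bond

[Bond]
Mode=802.3ad
LACPTransmitRate=fast
MIIMonitorSec=100ms

# 20-wan.network (and similarly 20-enp0s0.network)
[Match]
Name=wan

[Network]
Bond=bond1

# 20-bond1.network
[Match]
Name=bond1

[Network]
DHCP=ipv4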

 

On a related note, my USB3 network adapter still doesn't get a link light after a reboot, but that is less of an issue now that I have two other ports successfully joined to the bond.

Posted

@Jbobspants I think the switch ports are dependent on the eth0 port being up first; maybe it has something to do with timing/ordering?

Posted

Hmm, that reminds me that I have a similar problem: when the system comes up, the wan port doesn't pull an address (IPv4 or IPv6) until I restart systemd-networkd. I think chrisf is on to something about timing.

 

Posted

That's a very good possibility, guys. I assume it's something a bit more complex than just renaming the files in the network directory to change the order?

 

I tried this:

garrettj@espressobin:~$ la /etc/systemd/network/

total 48
drwxr-xr-x 2 root root 4096 Feb 28 18:01 .
drwxr-xr-x 5 root root 4096 Feb 28 15:09 ..
-rw-r--r-- 1 root root   30 Feb  9 16:29 10-br0.netdev
-rw-r--r-- 1 root root   88 Feb 27 16:58 10-br0.network
-rw-r--r-- 1 root root   40 Feb  9 16:29 10-eth0.network
-rw-r--r-- 1 root root   40 Feb  9 16:29 10-lan0.network
-rw-r--r-- 1 root root   40 Feb  9 16:29 10-lan1.network
-rw-r--r-- 1 root root  118 Feb 27 16:18 20-bond1.netdev
-rw-r--r-- 1 root root   79 Feb 28 17:21 20-bond1.network
-rw-r--r-- 1 root root   42 Feb 27 16:25 20-enp0s0.network
-rw-r--r-- 1 root root   40 Feb 27 16:24 20-wan.network

And just for the heck of it, I also tried renaming 10-eth0.network to 01-eth0.network, but neither change seems to make a difference.

 

I assume it's something a bit lower-level than this, but I'm really not sure what to look at.
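
One thing I might try next (just a guess, based on the switch ports apparently depending on eth0): systemd-networkd has a BindCarrier= option that ties an interface's state to another interface's carrier, so something like this in 20-wan.network could change the ordering behaviour:

[Match]
Name=wan

[Network]
Bond=bond1
BindCarrier=eth0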
