
Jbobspants

Members
  • Posts: 8
  • Joined
  • Last visited
  1. That's a very good possibility, guys. I assume it's something a bit more complex than just renaming the files in the network directory to change the order? I tried this:

     garrettj@espressobin:~$ la /etc/systemd/network/
     total 48
     drwxr-xr-x 2 root root 4096 Feb 28 18:01 .
     drwxr-xr-x 5 root root 4096 Feb 28 15:09 ..
     -rw-r--r-- 1 root root   30 Feb  9 16:29 10-br0.netdev
     -rw-r--r-- 1 root root   88 Feb 27 16:58 10-br0.network
     -rw-r--r-- 1 root root   40 Feb  9 16:29 10-eth0.network
     -rw-r--r-- 1 root root   40 Feb  9 16:29 10-lan0.network
     -rw-r--r-- 1 root root   40 Feb  9 16:29 10-lan1.network
     -rw-r--r-- 1 root root  118 Feb 27 16:18 20-bond1.netdev
     -rw-r--r-- 1 root root   79 Feb 28 17:21 20-bond1.network
     -rw-r--r-- 1 root root   42 Feb 27 16:25 20-enp0s0.network
     -rw-r--r-- 1 root root   40 Feb 27 16:24 20-wan.network

     And just for the heck of it, I also tried renaming 10-eth0.network to 01-eth0.network, but neither change seems to make a difference. I assume it's something a bit lower-level than this, but I'm really not sure what to look at.
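     If I'm reading the systemd.network man page right, only the first .network file whose [Match] section matches a given interface is applied, and any later-sorting matches are ignored. So a rename would only matter if two files are competing for the same port. Something along these lines (the file name here is hypothetical, not from my actual config) would sort ahead of 10-br0.network for the wan port:

     ```
     # /etc/systemd/network/05-wan-bond.network  (hypothetical name)
     # Sorts before 10-br0.network, so it would win the match for "wan"
     [Match]
     Name=wan

     [Network]
     Bond=bond1
     ```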
  2. Thanks for the nudge... from your post I assume it should just work like any other port. And yes, I had removed the port from the bridge before trying to add it to the bond. OK, so I went ahead and did a fresh install of the latest official download, 4.19.20-mvebu64. I reconfigured from scratch, and sure enough, I was able to get the "wan" port and my mPCIE "enp0s0" port to both join the bonded interface. Things don't work exactly as I expect, though. From a fresh reboot, for whatever reason the wan port doesn't automatically join the bond, but the enp0s0 port does come up. If I do "systemctl restart systemd-networkd", it comes back up with both ports in the bond. I've also tried a shut/no shut on the switch, but it only comes back up with enp0s0; wan is still not in the bond until I restart systemd-networkd. My full config from /etc/systemd/network, plus the output of "ip addr" and "cat /proc/net/bonding/bond1" (before and after restarting systemd-networkd), are below. On a related note, my USB3 network adapter is still not getting a link light after a reboot, but that is less of an issue now that I have 2 other ports successfully joined to the bond.
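     For anyone following along, the general shape of the setup I'm describing is roughly this (values are illustrative, not my exact files):

     ```
     # 20-bond1.netdev -- define the bond device
     [NetDev]
     Name=bond1
     Kind=bond

     [Bond]
     Mode=802.3ad

     # 20-bond1.network -- address the bond itself via DHCP
     [Match]
     Name=bond1

     [Network]
     DHCP=yes

     # 20-wan.network -- enslave a port (repeat per port, e.g. enp0s0)
     [Match]
     Name=wan

     [Network]
     Bond=bond1
     ```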
  3. Ah, OK, that's a PCIE x4 card; I don't think we'll have much luck with it in an Espressobin mPCIE x1 slot. Might be fun to get a third 10gig port on a Macchiatobin, though.
  4. Yes, like @ManoftheSea pointed out above, we are aware that the three onboard ports have a 1gig bottleneck at the SoC. However, with a 1gbps mPCIE card in addition to one of the built-in ports, there is a theoretical 2gbps path. I was only testing the lan0 + lan1 etherchannel group for the sole purpose of trying to get any of the built-in ports to show up in an etherchannel/bond interface (unsuccessfully, so far). @lanefu, I appreciate your experience and insight, and I'd say you're right on with your assessment of "typical" traffic going to a home fileserver. You make a good point to level-set for anyone coming across this thread, and this most certainly wouldn't be a great option for an HTPC or home media server. From my perspective, however, the Espressobin has a [theoretical] 6gbps path to SATA, so it's a shame the network bottlenecks at 1gbps. In my hypothetical use-case, there would be several hosts simultaneously accessing data on this server. Each of those hosts would have a single 1gig NIC, so of course there would be no point in going above that speed for any single connection. However, as more and more hosts try to access the share at the same time, a handful of 1gig clients could easily exceed a single interface, so an etherchannel would make sense. Also, my experience comes from a work environment where a single link is a potential single point of failure. We don't install anything without redundant links, and an etherchannel is a great way to allow for automated failover in the event of one link failing, without having to run additional heartbeat software. I realize that's probably not something your typical home user is concerned with, but IMO it would be cool to have. One final note along the lines of @lanefu's post: I should mention that in all my tests so far with dual 1gig links to the Espressobin, I am hitting a CPU bottleneck before ever getting close to 2gbps network speed.
Obviously I don't have an etherchannel working yet (and I really don't know how that will affect CPU utilization for network throughput), but with one of the built-in ports on VLAN A, and the mPCIE port on VLAN B, using four other test boxes (2 on VLAN B, 2 on VLAN A), all doing reads, writes, or simultaneous reads and writes, I have been unable to achieve much over 100MB/sec total before the Espressobin cores both peg at 100%. I've done a few tests with NFS exports, and a few with NBD exports, but all my tests so far have been limited by the CPU on the server. I'll continue to test and tweak my setup, but at this point, I'm not sure this is the right platform for a high speed NAS server. All that said, I'm still trying to figure out how to add one of the built-in ports to a bond interface... Any suggestions would be greatly appreciated! Edit: Just to clarify, 1gbps (gigabits per second) is about 125MB/sec, minus overhead. 2gbps would be about 250MB/sec, give or take.
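     The unit conversion in that edit is easy to sanity-check with a quick script (decimal units, ignoring protocol overhead):

     ```python
     def gbps_to_mb_per_sec(gbps: float) -> float:
         """Convert link speed in gigabits/sec to megabytes/sec (decimal units)."""
         return gbps * 1000 / 8  # 1 Gbit = 1000 Mbit; 8 bits per byte

     print(gbps_to_mb_per_sec(1.0))  # 125.0 MB/sec for a single gigabit link
     print(gbps_to_mb_per_sec(2.0))  # 250.0 MB/sec for a theoretical 2-port bond
     ```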
  5. So I haven't been able to successfully get two ports into a bond yet; it seems to be an issue with the Topaz switch. I'm not sure if a different configuration is needed because of the way those three built-in interfaces are somehow sub-interfaces of eth0; I've never worked with anything in this configuration before. Basically, the single port on the mPCIE card joins the bond and I get an IP address from DHCP on the bond interface, but I can't get any other interfaces to join the bond. I even tried creating a bond with lan0 and lan1 together (thinking those ports are more similar and might work better for some reason), but neither of them comes up, and neither joins the bond. Does anyone have a suggestion, considering the way this switch is managed?

     I also tried a USB 3 gigabit adapter as a second port for testing, but I'm having weird driver issues. I'm using an ASIX AX88179 module, which is listed as supported, but after a reboot it never comes up (no link light, although it works fine on my Fedora laptop). It seems to be recognized by the kernel; dmesg shows the device and creates an interface, but it just won't come online or show that it's connected. I tried unplugging/replugging the module a few times; once it actually came up on USB2 and worked fine (presumably at USB2 speeds), but I haven't been able to replicate that. Dmesg output from that time is below. Of particular note: the module is recognized at boot (2.192970), I unplug it at 226.216153, plug it back in at 234.823791, there's an issue with the device configuration and it apparently power-cycles USB3, then recognizes it as USB2:

     [ 235.071789] usb 3-1: new SuperSpeed Gen 1 USB device number 4 using xhci-hcd
     [ 235.095080] usb 3-1: No LPM exit latency info found, disabling LPM.
     [ 235.095140] usb 3-1: no configurations
     [ 235.095169] usb 3-1: can't read configurations, error -22
     [ 235.123870] usb usb3-port1: attempt power cycle
     [ 236.327372] usb 2-1: new high-speed USB device number 4 using xhci-hcd

     I'm not sure how normal that sequence is; I haven't used USB much on these SBCs, and I'm not really sure where to look next to get the USB3 adapter working at gigabit speeds. Anyway, now I'm kinda stuck on etherchannel testing until I can solve one of the problems below:
     • Figure out how to configure one of the Topaz switch ports in a bonded interface
     • Figure out the USB3 driver issues on my USB module
     • Wait for my 2-port mPCIE card, which arrives in a few weeks (and sort out any driver issues with it)

     I'm definitely open to suggestions, and I will post any breakthroughs I might have.
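     In case it helps anyone compare before/after states, here's a small throwaway script for listing which slaves have actually joined a bond by parsing /proc/net/bonding/bondX. The parsing assumes the standard bonding driver output format, and the sample text below is made up, not my actual output:

     ```python
     def bond_slaves(text: str) -> dict:
         """Parse `cat /proc/net/bonding/bondX` output into {slave: mii_status}."""
         slaves = {}
         current = None
         for line in text.splitlines():
             line = line.strip()
             if line.startswith("Slave Interface:"):
                 current = line.split(":", 1)[1].strip()
             elif line.startswith("MII Status:") and current is not None:
                 # Only record MII Status lines that follow a Slave Interface line,
                 # skipping the bond's own top-level MII Status
                 slaves[current] = line.split(":", 1)[1].strip()
                 current = None
         return slaves

     # Made-up sample resembling the bonding driver's output format
     sample = """\
     Ethernet Channel Bonding Driver: v3.7.1

     Bonding Mode: IEEE 802.3ad Dynamic link aggregation
     MII Status: up

     Slave Interface: enp0s0
     MII Status: up

     Slave Interface: wan
     MII Status: down
     """
     print(bond_slaves(sample))  # {'enp0s0': 'up', 'wan': 'down'}
     ```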
  6. Good info, guys! @Spemerchina, any chance you could post a link to that 10Gb Asus card you mentioned? My single-port network card arrived from Amazon today, along with a half-size to full-size mPCIE bracket. Everything went together easily and was recognized by Armbian right away. As soon as I booted there was a new "enp0s0" device in "ip link", and with a little fiddling in /etc/systemd/network/ I was able to get it to come up at boot and pull an IP automatically. Not much time for testing today, unfortunately, and I'll need to bring home a managed switch from work in order to actually test etherchannel performance. I also ordered this 2-port card from eBay. It's coming from China so it will be 3 weeks minimum, but at under $40 I figure it's worth a chance. Of course there are no details on the chip or anything, so there's no telling how much fighting I'll have to get the drivers to work.
  7. True, the onboard SATA has its own direct path, and that's what some of tkaiser's benchmarks were testing: the onboard SATA port vs. ports on his mPCIE SATA expansion board. It seems the onboard port is significantly faster, while the tests on the mPCIE SATA ports were limited to roughly 2Gbps. I'm not sure if the bottleneck was the cheap 88SE9215 SATA card or the mPCIE path. I looked for specs on the mPCIE port, but was not able to determine how many PCIE lanes or which PCIE revision it uses (this might be common knowledge in the community, but I wasn't able to dig it up in a quick search). A 10Gbps card would be very cool, but in the same test he came to the conclusion that overall speed using both onboard and mPCIE SATA simultaneously (in RAID 0/striping mode) was limited by the CPU. I know SATA with software RAID and network benchmarks are very different, but in a real-world scenario I doubt we'd be able to get anywhere near 10Gbps.
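     For ballpark numbers, per-lane PCIE throughput can be estimated from the transfer rate and the line encoding (Gen 1/2 use 8b/10b, Gen 3 uses 128b/130b). Whether any of these apply to the Espressobin's slot depends on exactly the specs I couldn't find, but the roughly 2Gbps ceiling above would at least be consistent with a Gen 1 x1 link:

     ```python
     def pcie_lane_gbps(gen: int) -> float:
         """Effective per-lane throughput in Gbit/s after line-code overhead."""
         raw = {1: 2.5, 2: 5.0, 3: 8.0}[gen]              # GT/s per lane
         eff = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130}[gen]  # encoding efficiency
         return raw * eff

     print(pcie_lane_gbps(1))  # 2.0 Gbps effective on a Gen 1 x1 link
     print(pcie_lane_gbps(2))  # 4.0 Gbps effective on a Gen 2 x1 link
     ```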
  8. How about with a mPCIE network card? Currently there is a Syba Gigabit Ethernet Mini PCI Express card on Amazon for about $17. That particular card only gets you one additional port, but I have seen options with two ports (although I haven't found any 2-port models anywhere near this price point). Would it be possible to use a gigabit port on the mPCIE expansion slot in an etherchannel with one of the ports of the built-in switch to achieve a 2gbps link? I haven't figured out how to configure those built-in switch ports as anything other than the 3-port bridge yet, and I wonder how limited our options are with that Topaz switch in the middle. If the 2-port gigabit expansion card weren't so cost-prohibitive, I think the mPCIE slot would have the bandwidth to do 2gbps by itself. Judging by tkaiser's benchmarks of the mPCIE SATA expansion board, it looks like he's hitting between 250,000 and nearly 300,000 kilobytes/sec when using a single drive on the expansion board. Of course that's dependent on the drive and several other factors, but it gives us an upper limit of at least 1.9 to 2.3 gigabits/sec. And of course this all assumes you're not already using the mPCIE slot for more SATA ports. :-\
