Jbobspants (Members, 8 posts)

Reputation Activity

  1. Like
    Jbobspants got a reaction from Spemerchina in Espressobin - etherchannel?   
    Good info, guys!
     
    @Spemerchina any chance you could post a link to that 10Gb ASUS card you mentioned?
     
    My single-port network card arrived from Amazon today, along with a half-size to full-size mPCIe bracket. Everything went together easily and was recognized by Armbian right away. As soon as I booted there was a new "enp0s0" device in "ip link", and with a little fiddling in /etc/systemd/network/ I was able to get it to come up at boot and pull an IP automatically (a sketch of the file is at the end of this post).
    Not much time for testing today, unfortunately, and I'll need to bring home a managed switch from work in order to actually test etherchannel performance.
     
    I also ordered this 2-port card from eBay. It's coming from China, so it will be three weeks minimum, but at under $40 I figure it's worth a shot. Of course there are no details on the chipset or anything, so there's no telling how much of a fight I'll have getting the drivers to work.
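     
    In case it helps anyone doing the same thing, the "fiddling" boils down to one small file. This is just a minimal sketch, assuming the card shows up as "enp0s0" like mine did (the filename is arbitrary; the number prefix only controls ordering):
     
        # /etc/systemd/network/20-enp0s0.network
        [Match]
        Name=enp0s0
        
        [Network]
        DHCP=yes
     
    After saving it, a "systemctl restart systemd-networkd" (or a reboot) should bring the port up with a DHCP address.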
  2. Like
    Jbobspants got a reaction from Spemerchina in Espressobin - etherchannel?   
    Thanks for the nudge... from your post I assume it should just work like any other port. And yes, I had removed the port from the bridge before trying to add it to the bond.
     
    Ok, so I went ahead and did a fresh install of the latest official download, 4.19.20-mvebu64. Reconfigured from scratch, and sure enough, I was able to get the "wan" port and my mPCIe "enp0s0" to both join the bonded interface. Things don't work exactly as I expect, though. From a fresh reboot, for whatever reason the wan port doesn't automatically join the bond, but the enp0s0 port does come up. If I do "systemctl restart systemd-networkd", it comes back up with both ports in the bond. I've also tried a shut/no shut on the switch, but it only comes back up with enp0s0; wan is still not in the bond until I restart systemd-networkd.
     
    Full config from /etc/systemd/network, plus the output of "ip addr" and "cat /proc/net/bonding/bond1" (before and after restarting systemd-networkd), are below.
     
     
    On a related note, my USB3 network adapter is still not getting a link light after a reboot, but that is less of an issue now that I have 2 other ports successfully joined to the bond.
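     
    For anyone who can't see the attachments, here's the rough shape of a systemd-networkd bonding setup. This is a sketch rather than my exact files: "bond1", "wan", and "enp0s0" match my setup, and I'm assuming 802.3ad (LACP) mode, so substitute whatever mode your switch-side etherchannel is actually configured for:
     
        # /etc/systemd/network/25-bond1.netdev
        [NetDev]
        Name=bond1
        Kind=bond
        
        [Bond]
        Mode=802.3ad
        
        # /etc/systemd/network/25-bond1-members.network
        [Match]
        Name=wan enp0s0
        
        [Network]
        Bond=bond1
        
        # /etc/systemd/network/25-bond1.network
        [Match]
        Name=bond1
        
        [Network]
        DHCP=yes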
  3. Like
    Jbobspants got a reaction from lanefu in Espressobin - etherchannel?   
    Yes, like @ManoftheSea pointed out above, we are aware that the three onboard ports have a 1gig bottleneck at the SoC. However, with a 1gbps mPCIe card in addition to one of the built-in ports, there is a theoretical 2gbps path. I was only testing the lan0 + lan1 etherchannel group with the sole purpose of trying to get any of the built-in ports to show up in an etherchannel/bond interface (unsuccessfully, so far).
     
    @lanefu, I appreciate your experience and insight, and I'd say you're right on with your assessment of "typical" traffic going to a home fileserver. You have a good point to level-set for anyone coming across this thread, and this most certainly wouldn't be a great option for an HTPC or home media server. From my perspective, however, the Espressobin has a [theoretical] 6gbps path to SATA, so it's a shame the network bottlenecks at 1gbps. In my hypothetical use case, there would be several hosts simultaneously accessing data on this server. Each of those hosts would have a single 1gig NIC, so of course there would be no point in going above that speed for any single connection. However, as more and more hosts try to access this share at the same time, a handful of 1gig clients could easily exceed a single interface, so an etherchannel would make sense.
     
    Also, my experience comes from a work environment where a single link is a potential single point of failure. We don't install anything without redundant links, and an etherchannel is a great way to allow for automated failover in the event of one link failing, without having to run additional heartbeat software. I realize that's probably not something your typical home user is concerned with, but IMO it would be cool to have.
     
    One final note along the lines of @lanefu's post: I should mention that in all my tests so far with dual 1gig links to the Espressobin, I am hitting a CPU bottleneck before ever getting close to 2gbps network speed. Obviously I don't have an etherchannel working yet (and I really don't know how that will affect CPU utilization for network throughput). But with one of the built-in ports on VLAN A and the mPCIe port on VLAN B, using four other test boxes (2 on VLAN B, 2 on VLAN A) all doing reads, writes, or simultaneous reads and writes, I have been unable to achieve much over 100MB/sec total before both Espressobin cores peg at 100%. I've done a few tests with NFS exports and a few with NBD exports, but all my tests so far have been limited by the CPU on the server. I'll continue to test and tweak my setup, but at this point I'm not sure this is the right platform for a high-speed NAS server.
     
    All that said, I'm still trying to figure out how to add one of the built-in ports to a bond interface... Any suggestions would be greatly appreciated! (The runtime commands I've been testing with are sketched at the end of this post.)
     
    Edit: Just to clarify, 1gbps (gigabits per second) is about 125MB/sec, minus overhead. 2gbps would be about 250MB/sec, give or take.
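     
    For reference, here's the quick way I've been testing bonds at runtime without touching any config files. A sketch using plain iproute2 ("bond1" and the port names are from my setup; member ports have to be down, and out of any bridge, before they can be enslaved):
     
        # create the bond via the kernel bonding driver, checking link state every 100ms
        ip link add bond1 type bond mode active-backup miimon 100
        # members must be down before joining
        ip link set enp0s0 down && ip link set enp0s0 master bond1
        ip link set wan down && ip link set wan master bond1
        ip link set bond1 up
     
    Mode "active-backup" gives the pure-failover behavior I described above and needs nothing special on the switch; 802.3ad (LACP) is the mode a managed-switch etherchannel would use.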
     
  4. Like
    Jbobspants got a reaction from Spemerchina in Espressobin - etherchannel?   
    (Same post as #3 above.)
  5. Like
    Jbobspants got a reaction from ManoftheSea in Espressobin - etherchannel?   
    (Same post as #3 above.)
  6. Like
    Jbobspants got a reaction from Spemerchina in Espressobin - etherchannel?   
    How about with an mPCIe network card?
     
    Currently there is a Syba Gigabit Ethernet Mini PCI Express card on Amazon for about $17. That particular card only gets you one additional port, but I have seen options with two ports (although I haven't found any 2-port models anywhere near this price point).
     
    Would it be possible to use a gigabit port on the mPCIe expansion slot in an etherchannel with one of the ports of the built-in switch, to achieve a 2gbps link? I haven't figured out how to configure those built-in switch ports to anything other than the 3-port bridge yet (see the edit at the bottom of this post), but I wonder how limited our options are with that Topaz switch in the middle.
     
    If the 2-port gigabit expansion card wasn't so cost-prohibitive, I think the mPCIe slot would have the bandwidth to do 2gbps by itself. Judging by tkaiser's benchmarks of the mPCIe SATA expansion board, it looks like he's hitting between 250,000 and nearly 300,000 kilobytes/sec when using a single drive on the expansion board. Of course that's dependent on the drive and several other factors, but it puts the slot's usable bandwidth at no less than roughly 1.9 to 2.3 gigabits/sec.
     
    And of course this all assumes you're not already using the mPCIE slot for more SATA ports. :-\
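     
    Edit: for anyone else poking at this, the built-in ports are DSA switch ports and can be detached from the default bridge with plain iproute2. A sketch, assuming the default Armbian bridge is named br0:
     
        ip link set lan0 nomaster   # pull lan0 out of br0
        ip link set lan0 up         # bring it back up on its own
     
    After that, lan0 can be configured like a standalone NIC, which is the prerequisite for trying to add it to a bond.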