Posts posted by Spemerchina

  1. 2 hours ago, spqr said:

    I ordered eight of the v7 units from GS and six of them couldn't make 48 hours before a freeze or kernel panic. Two of them stay up indefinitely.

    I concluded it was a hardware problem and am returning the units.

     

PS: this was with their stock Ubuntu kernel...

Interesting. What was your application that needed so many bins?

  2. On 2/21/2019 at 4:57 AM, Jbobspants said:

    Good info, guys!

     

    @Spemerchina any chance you could post a link to that 10gb Asus card you mentioned?

     

    My single-port network card arrived from Amazon today, along with a half-size to full-size mPCIE bracket. Everything went together easily and was recognized by Armbian right away. As soon as I booted there was a new "enp0s0" device in "ip link", and with a little fiddling in /etc/systemd/network/ I was able to get it to come up at boot and pull an IP automatically.


    File: /etc/systemd/network/10-enp0s0.network

    
    
    [Match]
    Name=enp0s0
    
    [Network]
    DHCP=ipv4 

     

    Not much time for testing today, unfortunately, and I'll need to bring home a managed switch from work in order to actually test etherchannel performance.

     

I also ordered this 2-port card from eBay. It's coming from China so it will be 3 weeks, minimum, but at under $40 I figure it's worth a shot. Of course there are no details on the chip or anything, so there's no telling how much fighting I'll have to do to get the drivers to work.

Hi, just Google ASUS XG-C100C.
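Regarding the quoted systemd-networkd config: a minimal sketch of the remaining steps, assuming an Armbian image where systemd-networkd is installed but not yet running (the .network file only takes effect once the service is active):

    # Enable and start systemd-networkd so 10-enp0s0.network is applied
    sudo systemctl enable --now systemd-networkd

    # Verify the link came up and pulled a DHCP lease
    networkctl status enp0s0
    ip addr show dev enp0s0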

  3. 10 hours ago, Jbobspants said:

     

True, the onboard SATA has its own direct path, and that's what some of tkaiser's benchmarks were testing: the onboard SATA port vs ports on his mPCIE SATA expansion board. It seems the onboard port is significantly faster, while the tests on the mPCIE SATA ports were limited to roughly 2Gbps. I'm not sure if the bottleneck was the cheap 88SE9215 SATA card or the mPCIE path. I looked for specs on the mPCIE port, but was not able to determine how many PCIE lanes or what PCIE revision it uses (this might be common knowledge to the community, but I wasn't able to dig it up in a quick search).

     

    A 10Gbps card would be very cool, but in the same test, he came to the conclusion that overall speed using both onboard and mPCIE SATA simultaneously (in RAID 0/striping mode) was limited by the CPU. I know SATA w/software RAID and network benchmarks are very different, but in a real-world scenario, I doubt we'd be able to get anywhere near 10Gbps.


     

It looks like we have PCIe 2.0, which supports 500/500 MB/s simultaneous read/write bandwidth. There are some cheap Asus 10GbE adapters that could possibly use all the bandwidth available (read from SATA, write to PCIe and in the opposite direction), so we could see close to 400/400 MB/s out of a 10GbE port without RAID overhead. I'm not interested in RAID, just JBOD performance :)

     

Edit: I'm not sure about the exact wiring in the Espressobin and the capabilities of the SERDES lanes shared between PCIe 2.0 and SATA; some research suggests around 7.5Gbps total throughput, so I'll lower my expectations to roughly 3Gbps full duplex with a separate 10GbE card in the mPCIe slot.

     

    I believe @tkaiser could say a word or two on the matter :)
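For reference, a back-of-envelope sketch of where those numbers come from, assuming the mPCIe slot carries a single PCIe 2.0 lane (as the block diagram suggests):

    # PCIe 2.0, one lane: 5 GT/s raw, 8b/10b encoding
    #   5 GT/s * 8/10 = 4 Gbps = 500 MB/s per direction (hence the 500/500 figure)
    # A 10GbE NIC in that slot is therefore PCIe-bound well below line rate:
    #   at most ~4 Gbps each way before protocol overhead, and less again if the
    #   ~7.5 Gbps shared SERDES budget also has to carry SATA traffic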

     

     

     

  4. 12 hours ago, Jbobspants said:

    How about with a mPCIE network card?

     

Currently there is a Syba Gigabit Ethernet Mini PCI Express card on Amazon for about $17. That particular card only gets you one additional port, but I have seen options with two ports (although I haven't found any 2-port models anywhere near this price point).

     

    Would it be possible to use a gigabit port on the PCI expansion slot in an etherchannel with one of the ports of the built-in switch to achieve a 2gbps link? I haven't figured out how to configure those built-in switch ports to anything other than the 3-port bridge yet, but I wonder how limited our options are with that Topaz switch in the middle.

     

    If the 2-port gigabit expansion card wasn't so cost-prohibitive, I think the mPCIE slot would have the bandwidth to do 2gbps by itself. Judging by tkaiser's benchmarks of the mPCI SATA expansion board, it looks like he's hitting between 250,000 and nearly 300,000 kiloBytes/sec when using a single drive on the expansion board. Of course that's dependent on the drive and several other factors, but that gives us an upper limit of at least 1.9 to 2.3 gigabits/sec.

     

    And of course this all assumes you're not already using the mPCIE slot for more SATA ports. :-\

     

Not a bad idea :) Looking at http://espressobin.net/wp-content/uploads/2017/01/ESPRESSObin-V3-Hardware-Block-diagram-v3-1.pdf , one could imagine that we might get much higher than 2Gbps if we go the 10Gbps PCIe adapter route.. SATA has its own direct lane to the SoC.
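On the open question in the quoted post about the mPCIe slot's lane count and PCIe revision: with any card seated in the slot, the negotiated link can be read out directly. A minimal sketch, assuming lspci from pciutils is installed:

    # LnkCap/LnkSta report the slot's capabilities and the negotiated link;
    # "Speed 5GT/s" corresponds to PCIe 2.0, "Width x1" to a single lane
    sudo lspci -vv | grep -E 'LnkCap:|LnkSta:'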

  5. On 2/7/2019 at 3:20 PM, ManoftheSea said:

    It was my understanding that the whole Topaz switch has only the 1 Gbps connection to the SOC, and that all three ethernet ports were from that switch.  The labels for wan, lan0, and lan1 are just labels, on switch ports 1, 2, and 3.

     

Unfortunately you are right: Topaz is connected to the SoC via RGMII (1Gbps), so there is no chance for a port channel.
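A quick way to confirm that single 1Gbps CPU port, assuming the DSA conduit interface shows up as eth0 on Armbian (wan/lan0/lan1 hang off it):

    # The fixed RGMII link between the SoC and the Topaz switch
    ethtool eth0 | grep Speed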

  6. On 1/23/2019 at 7:35 PM, ManoftheSea said:

    I have an EspressoBINv7, I'm running Armbian 5.72 (Stretch) with kernel 4.19.  It appears to be supported.

     

    On 1/23/2019 at 7:26 AM, Igor said:


    Hard to say since there are so many different versions. I do have two (non v7) and both are the same ... Since we have most up2date firmware, I can speculate that yes.

     

@Igor, is there a procedure non-internal contributors could follow to get the v7 onto the official support list? Maybe some of the guys could run the tests if what @ManoftheSea said is not enough.

     

    Thanks

Here is the configuration on both ends.

     

    Cisco switch side:

     

    interface Port-channel2
     description LACP Channel for mk2
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 1,2
     switchport mode trunk
     spanning-tree portfast trunk
    !
    interface GigabitEthernet1/0/23
     description mk2 eth0
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-group 2 mode active
    !
    interface GigabitEthernet1/0/24
     description mk2 eth1
     switchport trunk encapsulation dot1q
     switchport mode trunk
     channel-group 2 mode active
    !

     

Armbian module config:

# /etc/modules: kernel modules to load at boot time.
bonding mode=4 miimon=100 lacp_rate=1

     

Armbian networking:

# /etc/network/interfaces
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
     
    auto eth0
        iface eth0 inet manual
        bond-master bond0
     
    auto eth1
         iface eth1 inet manual
         bond-master bond0
     
    auto bond0
         iface bond0 inet static
         address 10.0.0.80
         gateway 10.0.0.1
         netmask 255.255.255.0
     
     
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate 1
    bond-slaves none
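To confirm the aggregate actually negotiated, a short check on both sides (a sketch; interface and channel-group names as in the config above):

    # Linux side: mode should read 802.3ad and both slaves should be listed as up
    cat /proc/net/bonding/bond0

    ! Cisco side: Gi1/0/23-24 should appear bundled (flag "P") under Po2
    show etherchannel summary
    show lacp neighbor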

Thanks.

  8. Hi everyone, 

     

especially Armbian maintainers!

     

I'd like to make use of this board's great SATA performance, which is crippled by the 1Gbps network, so my question is: would etherchannel work on this hardware? I understand that the board has only 1Gbps from the switch (the dual LAN ports), so I guess the only possible way to exceed 1Gbps of output would be to bundle WAN && [LAN0|LAN1]. Has anyone tried this, or does anyone believe it should work? I don't have the board to try.

     

    Thanks.
