Everything posted by Arglebargle

  1. What's memory performance like on mainline vs 4.4? I've been catching myself up on the state of mainline Rockchip support this morning and stumbled across this: https://github.com/rockchip-linux/u-boot/issues/34. I don't have an unused board to boot and test myself, but if DDR clocks really are being set low by u-boot and RAM performance is no bueno, it might be worth a known-issue mention when/if mainline is accepted for support.
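
     A quick way to check would be running the same memory bandwidth benchmark on both kernels. Just a sketch; sysbench is simply what I'd reach for, and the sizes are arbitrary:

        # compare memory throughput on the 4.4 image vs the mainline image
        sysbench memory --memory-block-size=1M --memory-total-size=8G run
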
  2. Ah, good call, I'll definitely play with pinning processes to different cores while I test. I usually keep netdata running while benching to have some indication of what's going on with the system. I've had trouble getting the board to recognize any of my Mellanox cards (I've tested 3 different models), so I'm not sure what's going on there. There's nothing suspicious in dmesg as far as I can tell either; the cards just aren't detected. I suppose I should dump a boot log and see if ayufan or anyone else knows what's wrong. The RP64 is a rev 2.1 board, so the problem isn't the PCIe 3.3V supply resistor that needed to be removed on prior batches. The rest of my bargain-basement 10GbE eBay gear arrived this week; once I learn a little more about how to work with these Brocade switches I should be able to get some performance numbers for a 4x1Gb bonded connection. Yeah, I tend to agree. Is irqbalance a potential solution?
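
     In case it's useful, this is the sort of pinning I mean; the core number and IRQ number below are just placeholders, not values from my board:

        # run the benchmark process on a specific core
        taskset -c 5 iperf3 -s
        # or steer a NIC queue's interrupt to a chosen core by hand
        grep eth0 /proc/interrupts                    # find the IRQ number first
        echo 5 > /proc/irq/45/smp_affinity_list       # pin IRQ 45 (made-up number) to CPU 5
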
  3. Yeah, I'm planning to post a few. I just got my enterprise switch with 4x SFP+ ports in this week, and I'm waiting on a couple more 10GbE cards so I can test at >1Gb speeds. FWIW, 10GbE cards are dirt cheap on eBay if you don't mind SFP+. I picked up a couple of Mellanox ConnectX-2s and a DAC to connect them for around $50 total. Also, it's not widely known, but their hardware is basically identical across models that share the same port configuration each generation. You can buy cheap InfiniBand cards that no one seems to want and cross-flash them with the VPI model firmware that supports both Ethernet and InfiniBand. Out of curiosity, does anyone know if iperf in server mode can handle more than one client at once, or do I need to run one copy per client on separate ports?
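
     If it turns out the answer is one client at a time, the fallback I had in mind is one server per client on separate ports, something like this (port numbers are arbitrary):

        # start three detached iperf3 servers on separate ports
        for port in 5201 5202 5203; do
            iperf3 -s -D -p ${port}
        done
        # each client then targets its own port, e.g.
        # iperf3 -c <server-ip> -p 5202 -t 30
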
  4. Thanks! PCIe seems to be working fine on the most recent 4.4 kernel; the Intel NIC I tested linked at 5 GT/s x4 without issue.

        root@rockpro64:~# uname -a
        Linux rockpro64 4.4.132-1075-rockchip-ayufan-ga83beded8524 #1 SMP Thu Jul 26 08:22:22 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

        LnkSta: Speed 5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
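
     For reference, that LnkSta line comes from verbose lspci output, something like the following; the device address is just an example, not the actual slot on my board:

        # check the negotiated link speed/width for a given PCIe device
        lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
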
  5. My July batch board arrived a couple of days ago, what process are people following to build a bootable image? I'd like to run some tests with an Intel i350-T4 and a Mellanox 10Gb card.
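
     The route I was going to try is the Armbian build framework; a rough sketch only, and the BOARD/BRANCH values are my guesses rather than anything documented for this board yet:

        # fetch the Armbian build framework and kick off an image build
        git clone https://github.com/armbian/build
        cd build
        ./compile.sh BOARD=rockpro64 BRANCH=dev
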
  6. That is really strange. I'll get out one of my spare boards and see if I can replicate/diagnose. Sorry for the hassle! Quick Q: what's the lowest kernel version that will run this script? My implementation is pretty different since I'm >=4.4 on all of my boards. There's no need to use per-CPU zram devices in that case; the kernel allocates one compression stream per core without needing multiple devices these days. I think <3.15 is the cutoff for that feature.
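
     For comparison, this is roughly what my single-device setup looks like on >=4.4; the lz4 choice and the 1G size are just examples, not what this script uses:

        # one zram device is enough on recent kernels
        modprobe zram num_devices=1
        echo lz4 > /sys/block/zram0/comp_algorithm
        echo $(nproc) > /sys/block/zram0/max_comp_streams   # ignored on kernels that always use per-CPU streams
        echo 1G > /sys/block/zram0/disksize                 # must come after the two settings above
        mkswap /dev/zram0
        swapon -p 5 /dev/zram0
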
  7. Can someone answer a naive question about the new zram implementation? I think I've found something important that's been overlooked.

        # Use half of the real memory by default --> 1/${ram_divisor}
        ram_divisor=2
        mem_info=$(LC_ALL=C free -w 2>/dev/null | grep "^Mem" || LC_ALL=C free | grep "^Mem")
        memory_total=$(awk '{printf("%d",$2*1024)}' <<<${mem_info})
        mem_per_zram_device=$(( ${memory_total} / ${zram_devices} / ${ram_divisor} ))
        for (( i=1; i<=zram_devices; i++ )); do
            [[ -f /sys/block/zram${i}/comp_algorithm ]] && echo lz4 >/sys/block/zram${i}/comp_algorithm 2>/dev/null
            echo -n ${mem_per_zram_device} > /sys/block/zram${i}/disksize
            mkswap /dev/zram${i}
            swapon -p 5 /dev/zram${i}
        done
        echo -e "\n### Activated ${zram_devices} zram swap devices with ${mem_per_zram_device} MB each\n" >>${Log}

     Is the intent here to devote half of real memory to zram and take advantage of compression to multiply that chunk of memory, or is the intent to provide some small swap space without hitting swap on flash storage? Right now the script is doing the latter, not the former.

     The current implementation makes swap space equal in size to half of RAM, which is then compressed (usually at 2:1 or 3:1, most often at 3:1) and stored in memory. This means that, on average, the amount of actual RAM used by zram hovers between 25% and ~17% rather than the apparently intended 50%. I spent quite a bit of time researching and profiling something similar for my own use and I've settled on the following to allocate 50% of real memory to zram:

        mem_per_zram_device=$(( 3 * ${memory_total} / ${zram_devices} / ${ram_divisor} ))

     Coincidentally this is the same zram allocation that Google makes on ChromeOS. I've seen the same mistake made in almost every single zram script that I can find via Google, so this definitely isn't an Armbian thing. It looks like someone made the original logic error at some point a long time ago and it's been copy-pasted into every zram setup script for the last 7 or 8 years.

     For what it's worth, setting /sys/block/zramN/mem_limit does what everyone seems to be expecting and limits the amount of RAM that the zram device can use. Unfortunately there is no way to adaptively vary /sys/block/zramN/disksize according to the actual compression ratio, so the currently accepted best practice seems to be assuming 3:1 and punting (this is what Google does on ChromeOS).
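
     If it helps, here's a rough sketch of the change I'm suggesting, combined with a mem_limit cap; it reuses the variable names from the excerpt above and assumes the usual ~3:1 lz4 ratio:

        # size the swap devices at 3x the RAM share (anticipating ~3:1 compression),
        # then cap the actual RAM each device may consume at the intended share
        mem_per_zram_device=$(( 3 * ${memory_total} / ${zram_devices} / ${ram_divisor} ))
        mem_limit_per_device=$(( ${memory_total} / ${zram_devices} / ${ram_divisor} ))
        for (( i=1; i<=zram_devices; i++ )); do
            [[ -f /sys/block/zram${i}/comp_algorithm ]] && echo lz4 >/sys/block/zram${i}/comp_algorithm 2>/dev/null
            echo -n ${mem_per_zram_device} > /sys/block/zram${i}/disksize
            echo -n ${mem_limit_per_device} > /sys/block/zram${i}/mem_limit
            mkswap /dev/zram${i}
            swapon -p 5 /dev/zram${i}
        done
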