  1. ```
     ~> uptime
     17:09:17 up 19 days, 23:36, 1 user, load average: 0.10, 0.10, 0.14
     ```
     I use btrfs instead of OpenZFS.
  2. Weird. Are you saying those SD cards fail to boot with high-speed support enabled but succeed at normal speed? The odd thing is, I have a Lexar SD card here that is not compatible (probably because it's UHS-I). My USB SD card reader can read it.
  3. Does anyone know why it's not supported out of the box? The wiki provides a patch but does not mention why it's not upstream or in Armbian. The bus-width of 4 is also quite suspect: all the other Armada 38x devices have it at 8, while the Armada 37x ones have it at 4.
  4. Are you on the latest kernel?
  5. Yeah I know. I removed them. Simply posting for more info.
  6. Use this one:
  7. More info on DFS brokenness:
  8. A more performant alternative to AF_ALG... not a solution to your issues, unfortunately. IIRC, there was a way to overclock the Helios to 2.0 GHz or 1.8 GHz, but I don't remember the details.
  9. @karamike The issue has been found and fixed. You should update to a more recent kernel (latest legacy or current). I'm on 5.9.14 right now; it works flawlessly.
  10. It should probably be tested beforehand. Again, he tested with "iperf -P 4" and swconfig on an Armada device; he doesn't remember whether it was the Armada 37x or 38x. Clearfog devices are only the latter, I believe. edit: hm, I'm getting full speeds when I test this. I think he tested on the older 37x platform. edit2: a second possibility: iperf from one device connected to the switch to another device connected to the switch. I tested from the Helios device itself to a router (mvebu 385 platform running OpenWrt).
  11. Unrelated, but one of the patches I removed seems to actually be useful. I just talked to Felix Fietkau, and according to him the mvneta driver does not do any hardware queue configuration, leading to a nasty performance problem. It's testable with iperf -P 4 (not iperf3). The equipment he tested on did have a switch, though, and the Helios4 does not. I don't think it matters a whole lot for the way I'm using mine, though.
  12. Is it me or is the mv_xor driver being loaded late?
      ```
      [    0.007976] xor: measuring software checksum speed
      [    0.164062] xor: using function: arm4regs (2534.000 MB/sec)
      [    0.316068] raid6: neonx8   xor() 1096 MB/s
      [    0.452063] raid6: neonx4   xor() 1378 MB/s
      [    0.588069] raid6: neonx2   xor() 1610 MB/s
      [    0.724066] raid6: neonx1   xor() 1346 MB/s
      [    0.860075] raid6: int32x8  xor()  328 MB/s
      [    0.996072] raid6: int32x4  xor()  369 MB/s
      [    1.132082] raid6: int32x2  xor()  332 MB/s
      [    1.268063] raid6: int32x1  xor()  285 MB/s
      [    1
      ```