
raid6 numbers meaning


@lex


Out of curiosity: I have two different builds for the same board, burned them to the same SD card and ran them on the same board, and the raid6 numbers are twice as fast in one case.

Can anyone explain what these numbers mean (if they mean anything when running an SBC from an SD card)? I am curious to know what settings (or wrong settings) would give such a boost here.

 

Build one:


[    0.625230] raid6: int32x1  gen()   189 MB/s
[    0.795324] raid6: int32x1  xor()   169 MB/s
[    0.965393] raid6: int32x2  gen()   204 MB/s
[    1.135559] raid6: int32x2  xor()   166 MB/s
[    1.305616] raid6: int32x4  gen()   186 MB/s
[    1.475707] raid6: int32x4  xor()   138 MB/s
[    1.645902] raid6: int32x8  gen()   150 MB/s
[    1.815993] raid6: int32x8  xor()   111 MB/s
[    1.985984] raid6: neonx1   gen()   492 MB/s
[    2.156147] raid6: neonx1   xor()   375 MB/s
[    2.326253] raid6: neonx2   gen()   660 MB/s
[    2.496311] raid6: neonx2   xor()   481 MB/s
[    2.666444] raid6: neonx4   gen()   714 MB/s
[    2.836481] raid6: neonx4   xor()   508 MB/s
[    3.006635] raid6: neonx8   gen()   485 MB/s
[    3.176747] raid6: neonx8   xor()   378 MB/s
[    3.176752] raid6: using algorithm neonx4 gen() 714 MB/s
[    3.176755] raid6: .... xor() 508 MB/s, rmw enabled
[    3.176758] raid6: using intx1 recovery algorithm
[    3.178152] SCSI subsystem initialized

 

Build two:


[    0.669363] raid6: int32x1  gen()    90 MB/s
[    0.839099] raid6: int32x1  xor()    76 MB/s
[    1.009438] raid6: int32x2  gen()   121 MB/s
[    1.179528] raid6: int32x2  xor()    92 MB/s
[    1.349628] raid6: int32x4  gen()   123 MB/s
[    1.520081] raid6: int32x4  xor()    91 MB/s
[    1.690468] raid6: int32x8  gen()   117 MB/s
[    1.860516] raid6: int32x8  xor()    82 MB/s
[    2.030483] raid6: neonx1   gen()   235 MB/s
[    2.200598] raid6: neonx1   xor()   174 MB/s
[    2.370813] raid6: neonx2   gen()   316 MB/s
[    2.540996] raid6: neonx2   xor()   227 MB/s
[    2.711182] raid6: neonx4   gen()   378 MB/s
[    2.881357] raid6: neonx4   xor()   253 MB/s
[    3.051596] raid6: neonx8   gen()   335 MB/s
[    3.221758] raid6: neonx8   xor()   231 MB/s
[    3.221766] raid6: using algorithm neonx4 gen() 378 MB/s
[    3.221772] raid6: .... xor() 253 MB/s, rmw enabled
[    3.221779] raid6: using intx1 recovery algorithm
[    3.224888] SCSI subsystem initialized

 


20 minutes ago, tkaiser said:

What are the differences between the builds and which board are you talking about?

The NanoPi Duo, same defconfig, but it seems the numbers are higher (faster?) when the JMicron is not detected (maybe a wrong DTS). Maybe more power is available to the board's SoC, thus faster numbers?

 

[    0.060968] xor: measuring software checksum speed
[    0.158147]    arm4regs  :   598.000 MB/sec
[    0.258271]    8regs     :   356.400 MB/sec
[    0.358397]    32regs    :   364.800 MB/sec
[    0.458522]    neon      :   605.600 MB/sec
 

[    0.046909] xor: measuring software checksum speed
[    0.144802]    arm4regs  :  1260.000 MB/sec
[    0.244866]    8regs     :   750.800 MB/sec
[    0.344929]    32regs    :   874.800 MB/sec
[    0.444993]    neon      :   735.200 MB/sec
 


25 minutes ago, @lex said:

Maybe more power is available to the board's SoC, thus faster numbers?

 

This is something that can really happen on Raspberry Pis when powering is insufficient, since the firmware does frequency capping in such a situation (an RPi 3 is then forced down to 600 MHz while it still happily tells you it runs at 1200 MHz, but everything is half as fast). Fortunately Armbian does not support Raspberries and clones :)
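For illustration only (a quick user-space sketch, not the kernel's benchmark code): the raid6/xor figures are simply "how many MB/s can this CPU push through the gen()/xor() routines right now", so a trivial timing loop like the one below behaves the same way. Buffer size and pass count here are arbitrary choices; cap the CPU at half the clock (e.g. with cpufreq-set) and the printed MB/s drop roughly in half, just like in the two boot logs above.

/* xorbench.c - rough user-space analogue of the boot-time xor benchmark.
 * NOT the kernel's code; it just shows the numbers are pure CPU throughput. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE 4096        /* bytes processed per pass */
#define PASSES   (1 << 16)   /* enough passes for a stable measurement */

int main(void)
{
    static uint64_t a[BUF_SIZE / 8], b[BUF_SIZE / 8];
    struct timespec t0, t1;

    memset(a, 0xa5, sizeof(a));
    memset(b, 0x5a, sizeof(b));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int p = 0; p < PASSES; p++)
        for (size_t i = 0; i < BUF_SIZE / 8; i++)
            a[i] ^= b[i];                      /* the actual xor work */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double mb   = (double)BUF_SIZE * PASSES / (1024.0 * 1024.0);
    printf("xor: %.1f MB/s (check %llx)\n", mb / secs, (unsigned long long)a[0]);
    return 0;
}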

 

On every board we have three different stages where CPU clock speeds are set. Taking sunxi-next as the example:

  1. u-boot: defaults or a per-board CONFIG_SYS_CLK_FREQ setting (here you could get 480 vs. 1008 MHz depending on whether the default is used or not)
  2. kernel: depending on the default cpufreq governor configured (CONFIG_CPU_FREQ_DEFAULT_GOV_*), either the u-boot setting is still active or, if the device tree defines cpufreq OPPs and a working cpufreq driver is in the kernel, that driver takes over according to the chosen governor (with performance you would now be at 1200 MHz, with powersave at 240 MHz, and with other governors somewhere in between)
  3. As soon as the cpufrequtils service is started, the settings defined in Armbian's build system are applied

At early boot we are talking about stage 2 above, so there are plenty of reasons why performance can differ a lot within the first seconds after the kernel has started.
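To check which stage is currently in effect you can simply read the standard cpufreq sysfs nodes, e.g. with a minimal sketch like the one below (nothing Armbian-specific assumed, just the usual /sys/devices/system/cpu/cpu0/cpufreq/ layout; if those files don't exist, no cpufreq driver has taken over yet and u-boot's clock setting still applies):

/* cpufreq_check.c - print CPU0's current governor and clock from sysfs. */
#include <stdio.h>
#include <string.h>

static void show(const char *label, const char *path)
{
    char buf[64] = "n/a";
    FILE *f = fopen(path, "r");

    if (f) {
        if (fgets(buf, sizeof(buf), f))
            buf[strcspn(buf, "\n")] = '\0';   /* strip trailing newline */
        fclose(f);
    }
    printf("%-14s %s\n", label, buf);
}

int main(void)
{
    show("governor:",  "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor");
    show("cur (kHz):", "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
    show("max (kHz):", "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq");
    return 0;
}

With the performance governor this should report the highest OPP from the device tree; after the cpufrequtils service has run it reports whatever limits Armbian configured.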

 


54 minutes ago, tkaiser said:

u-boot: defaults or a per-board CONFIG_SYS_CLK_FREQ setting (here you could get 480 vs. 1008 MHz depending on whether the default is used or not)

Good hint. Both builds use kernel 4.13.12 with the same defconfig, but u-boot is different (FE u-boot vs. mainline u-boot), so that explains a lot. Thanks.
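For reference, stage 1 boils down to that single u-boot symbol (value in Hz). The two values below just mirror the 480 vs. 1008 MHz figures from the quote; the actual default and where the symbol is set (Kconfig or a board header) depend on the u-boot tree in use:

CONFIG_SYS_CLK_FREQ=480000000     (early boot at ~480 MHz)
CONFIG_SYS_CLK_FREQ=1008000000    (early boot at ~1008 MHz)

Until the kernel's cpufreq driver (stage 2) takes over, everything, including the raid6/xor benchmark if it runs that early, executes at whichever of these clocks u-boot left behind.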


34 minutes ago, zador.blood.stained said:

Assuming we don't care about breaking pre-initrd btrfs images with old boot scripts

 

Well, I hadn't thought about this. Then we'll live with it, since initrd support only came this year but we already had reports from people relying on btrfs for the rootfs before that.


I am not sure if I understand you correctly: do you mean we can't disable Btrfs due to dependencies (or have Btrfs as a module, if possible) in kernel 4.13.12 and above?

Unless there have been significant changes from 4.11 to 4.13, but as far as I can see, with kernel 4.11 there are no raid6 algorithm messages, so no ~3 second delay.

 

Sorry if I did not understand; I am doing some research on 4.13 and 4.14 for the H2+....

 

See cnxsoft kernel log:

https://www.cnx-software.com/2017/10/30/nanopi-duo-quick-start-guide-ubuntu-breadboard-mini-shield-msata-ssd/

 

 


14 minutes ago, @lex said:

I am not sure if I understand you correctly: do you mean we can't disable Btrfs due to dependencies (or have Btrfs as a module, if possible) in kernel 4.13.12 and above?

We don't want to switch btrfs to a module because it would break images using btrfs as root without an initrd, even though that's probably less than 1% of our user base.
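In config terms the constraint is just this (the option name is the real one; whether an initrd is present is what decides if =m is viable):

CONFIG_BTRFS_FS=y    built-in: the kernel can mount a btrfs rootfs on its own
CONFIG_BTRFS_FS=m    module: a btrfs rootfs only mounts if an initrd loads btrfs first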


42 minutes ago, @lex said:

do you mean we can't disable Btrfs due to dependencies (or have Btrfs as a module, if possible)

 

No, the 'dependency' stuff was meant to explain that RAID6_PQ might get activated by other stuff too.

 

Personally I would prefer to simply patch away the btrfs RAID6_PQ dependency in all Armbian kernels, since it makes absolutely no sense and only allows users to do stupid things (playing with btrfs RAID5/6, which is still considered experimental and makes exactly no sense on any of our boards except maybe the Helios4).
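For context, the dependency being talked about is a couple of select lines in fs/btrfs/Kconfig, shown roughly below as in 4.13-era kernels (exact lines vary by version), so this is what such a patch would have to touch:

config BTRFS_FS
        tristate "Btrfs filesystem support"
        ...
        select RAID6_PQ
        select XOR_BLOCKS
        ...

RAID6_PQ is what runs the raid6 gen()/xor() benchmark and XOR_BLOCKS the "xor: measuring software checksum speed" lines, so enabling btrfs at all, even without any RAID5/6 use, pulls the ~3 second benchmark seen above into every boot.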


This topic is now closed to further replies.
