
Software RAID testing Rockpi4 Marvell 4 port sata


Stuart Naylor


Hi, I am just doing some tests on a Rock Pi 4B 2GB with a Marvell 9235 SATA controller and 4x Integral P5 120GB SSDs.

Purely out of interest, as I was expecting the card to bottleneck, but I thought I'd push it to the max and see how things go.

I am running the default Radxa Debian Stretch 4.4 image, using mdadm for a start.

Looking for benchmarking tips & tricks and suggestions on what I should be outputting, so we can have a look for curiosity's sake.

Currently syncing a RAID10:
 

rock@rockpi4:~$ cat /proc/mdstat
Personalities : [raid0] [raid10]
md0 : active raid10 sdd[3] sdc[2] sdb[1] sda[0]
      234309632 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      [===========>.........]  resync = 55.4% (129931520/234309632) finish=8.5min speed=202624K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk
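As a quick sanity check, the `finish=` estimate in that mdstat line is just the remaining blocks divided by the reported resync speed. A minimal sketch using the figures copied from the output above:

```python
# Cross-check the mdstat resync ETA: remaining kibibytes / reported speed.
# All figures copied from the /proc/mdstat output above.
total_kb = 234309632      # total blocks (1K each) in the array
done_kb = 129931520       # blocks resynced so far (~55.4%)
speed_kb_s = 202624       # reported resync speed, K/sec

remaining_kb = total_kb - done_kb
eta_min = remaining_kb / speed_kb_s / 60
print(f"resync {done_kb / total_kb:.1%} done, ETA {eta_min:.1f} min")
```

That lands at roughly 8.6 minutes, consistent with the `finish=8.5min` mdstat truncates to.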

 

Jul  4 13:46:45 rockpi4 kernel: [   75.172062] md0: detected capacity change from 479866126336 to 0
Jul  4 13:46:45 rockpi4 kernel: [   75.172628] md: md0 stopped.
Jul  4 13:46:45 rockpi4 kernel: [   75.173397] md: unbind<sda>
Jul  4 13:46:45 rockpi4 kernel: [   75.190852] md: export_rdev(sda)
Jul  4 13:46:45 rockpi4 kernel: [   75.191282] md: unbind<sdd>
Jul  4 13:46:45 rockpi4 kernel: [   75.206849] md: export_rdev(sdd)
Jul  4 13:46:45 rockpi4 kernel: [   75.207325] md: unbind<sdb>
Jul  4 13:46:45 rockpi4 udisksd[565]: Unable to resolve /sys/devices/virtual/block/md0/md/dev-sdb/block symlink
Jul  4 13:46:45 rockpi4 kernel: [   75.239056] md: export_rdev(sdb)
Jul  4 13:46:45 rockpi4 kernel: [   75.239439] md: unbind<sdc>
Jul  4 13:46:45 rockpi4 kernel: [   75.254837] md: export_rdev(sdc)
Jul  4 13:47:12 rockpi4 kernel: [  102.258308]  sdc: sdc1 sdc2
Jul  4 13:47:12 rockpi4 kernel: [  102.288150]  sdc: sdc1 sdc2
Jul  4 13:48:09 rockpi4 kernel: [  159.300017] md: bind<sda>
Jul  4 13:48:09 rockpi4 kernel: [  159.308923] md: bind<sdb>
Jul  4 13:48:09 rockpi4 kernel: [  159.319055] md: bind<sdc>
Jul  4 13:48:09 rockpi4 kernel: [  159.320188] md: bind<sdd>
Jul  4 13:48:09 rockpi4 kernel: [  159.326830] md/raid0:md0: md_size is 937238528 sectors.
Jul  4 13:48:09 rockpi4 kernel: [  159.327314] md: RAID0 configuration for md0 - 1 zone
Jul  4 13:48:09 rockpi4 kernel: [  159.327759] md: zone0=[sda/sdb/sdc/sdd]
Jul  4 13:48:09 rockpi4 kernel: [  159.328165]       zone-offset=         0KB, device-offset=         0KB, size= 468619264KB
Jul  4 13:48:09 rockpi4 kernel: [  159.328937]  sdc: sdc1 sdc2
Jul  4 13:48:09 rockpi4 kernel: [  159.329369] 
Jul  4 13:48:09 rockpi4 kernel: [  159.330145] md0: detected capacity change from 0 to 479866126336
Jul  4 13:48:09 rockpi4 udisksd[565]: Error creating watch for file /sys/devices/virtual/block/md0/md/sync_action: No such file or directory (g-file-error-quark, 4)
Jul  4 13:48:09 rockpi4 udisksd[565]: Error creating watch for file /sys/devices/virtual/block/md0/md/degraded: No such file or directory (g-file-error-quark, 4)
Jul  4 13:49:40 rockpi4 kernel: [  250.355809] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
Jul  4 13:55:31 rockpi4 kernel: [  601.335494] panel disable
Jul  4 14:02:26 rockpi4 anacron[1047]: Anacron 2.3 started on 2019-07-04
Jul  4 14:02:26 rockpi4 anacron[1047]: Normal exit (0 jobs run)
Jul  4 14:02:59 rockpi4 kernel: [ 1049.309314] md0: detected capacity change from 479866126336 to 0
Jul  4 14:02:59 rockpi4 kernel: [ 1049.309886] md: md0 stopped.
Jul  4 14:02:59 rockpi4 kernel: [ 1049.310176] md: unbind<sdd>
Jul  4 14:02:59 rockpi4 kernel: [ 1049.327147] md: export_rdev(sdd)
Jul  4 14:02:59 rockpi4 kernel: [ 1049.327821] md: unbind<sdc>
Jul  4 14:02:59 rockpi4 kernel: [ 1049.350959] md: export_rdev(sdc)
Jul  4 14:02:59 rockpi4 kernel: [ 1049.351512] md: unbind<sdb>
Jul  4 14:02:59 rockpi4 udisksd[565]: Unable to resolve /sys/devices/virtual/block/md0/md/dev-sdb/block symlink
Jul  4 14:02:59 rockpi4 kernel: [ 1049.366971] md: export_rdev(sdb)
Jul  4 14:02:59 rockpi4 kernel: [ 1049.367513] md: unbind<sda>
Jul  4 14:02:59 rockpi4 kernel: [ 1049.383124] md: export_rdev(sda)
Jul  4 14:03:21 rockpi4 kernel: [ 1071.066678]  sdc: sdc1 sdc2
Jul  4 14:03:21 rockpi4 kernel: [ 1071.092394]  sdc: sdc1 sdc2
Jul  4 14:05:23 rockpi4 kernel: [ 1193.551804] md: bind<sda>
Jul  4 14:05:23 rockpi4 kernel: [ 1193.552267]  sdc: sdc1 sdc2
Jul  4 14:05:23 rockpi4 kernel: [ 1193.552547] md: bind<sdb>
Jul  4 14:05:23 rockpi4 kernel: [ 1193.553780] md: bind<sdc>
Jul  4 14:05:23 rockpi4 kernel: [ 1193.554266] md: bind<sdd>
Jul  4 14:05:23 rockpi4 kernel: [ 1193.570556] md: raid10 personality registered for level 10
Jul  4 14:05:23 rockpi4 kernel: [ 1193.573138] md/raid10:md0: not clean -- starting background reconstruction
Jul  4 14:05:23 rockpi4 kernel: [ 1193.573765] md/raid10:md0: active with 4 out of 4 devices
Jul  4 14:05:23 rockpi4 kernel: [ 1193.575635] created bitmap (2 pages) for device md0
Jul  4 14:05:23 rockpi4 kernel: [ 1193.578102] md0: bitmap initialized from disk: read 1 pages, set 3576 of 3576 bits
Jul  4 14:05:23 rockpi4 kernel: [ 1193.581797] md0: detected capacity change from 0 to 239933063168
Jul  4 14:05:23 rockpi4 kernel: [ 1193.583297] md: md0 switched to read-write mode.
Jul  4 14:05:23 rockpi4 kernel: [ 1193.588652] md: resync of RAID array md0
Jul  4 14:05:23 rockpi4 kernel: [ 1193.589019] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Jul  4 14:05:23 rockpi4 kernel: [ 1193.589541] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
Jul  4 14:05:23 rockpi4 kernel: [ 1193.590381] md: using 128k window, over a total of 234309632k.
Jul  4 14:25:02 rockpi4 kernel: [ 2372.292473] md: md0: resync done.
Jul  4 14:25:02 rockpi4 kernel: [ 2372.452970] RAID10 conf printout:
Jul  4 14:25:02 rockpi4 kernel: [ 2372.452989]  --- wd:4 rd:4
Jul  4 14:25:02 rockpi4 kernel: [ 2372.452998]  disk 0, wo:0, o:1, dev:sda
Jul  4 14:25:02 rockpi4 kernel: [ 2372.453005]  disk 1, wo:0, o:1, dev:sdb
Jul  4 14:25:02 rockpi4 kernel: [ 2372.453012]  disk 2, wo:0, o:1, dev:sdc
Jul  4 14:25:02 rockpi4 kernel: [ 2372.453019]  disk 3, wo:0, o:1, dev:sdd
Jul  4 14:30:45 rockpi4 kernel: [ 2715.470782] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)

 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    20355    28533    43513    44568    22556    28612
          102400      16    60835    71891   111520   107540    66074    71640
          102400     512   149988   129385   253123   263113   211684   131649
          102400    1024   161360   164943   274007   275765   253893   165764
          102400   16384   181646   182851   338294   347395   342601   176768


*************************************************************************************************

RAID5

 

rock@rockpi4:~$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: size set to 117154816K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
rock@rockpi4:~$ cat /proc/mdstat
Personalities : [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sdd[4] sdc[2] sdb[1] sda[0]
      351464448 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [>....................]  recovery =  1.6% (1898560/117154816) finish=19.2min speed=99924K/sec
      bitmap: 0/1 pages [0KB], 65536KB chunk
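The block counts in the various mdstat outputs line up with the usual usable-capacity formulas for each level. A quick check, using the per-device size mdadm reports above (`size set to 117154816K`):

```python
# Usable capacity per RAID level for 4 equal devices, cross-checked
# against the block counts mdadm/mdstat report in this thread.
per_disk_kb = 117154816   # "size set to 117154816K" from mdadm
n = 4

raid0 = n * per_disk_kb          # pure stripe, no redundancy
raid10 = (n // 2) * per_disk_kb  # 2 near-copies -> half the raw space
raid5 = (n - 1) * per_disk_kb    # one disk's worth of parity

print(raid0, raid10, raid5)
```

Those come out to 468619264, 234309632 and 351464448 blocks, matching the RAID0, RAID10 and RAID5 mdstat lines. (RAID1 reports a marginally different per-device size, 117155264K, hence its slightly different total.)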

 

Jul  4 14:49:52 rockpi4 kernel: [  491.913061] md: bind<sda>
Jul  4 14:49:52 rockpi4 kernel: [  491.913784] md: bind<sdb>
Jul  4 14:49:52 rockpi4 kernel: [  491.914381] md: bind<sdc>
Jul  4 14:49:52 rockpi4 kernel: [  491.914971] md: bind<sdd>
Jul  4 14:49:52 rockpi4 kernel: [  491.920396]  sdc: sdc1 sdc2
Jul  4 14:49:52 rockpi4 kernel: [  491.929530] async_tx: api initialized (async)
Jul  4 14:49:52 rockpi4 kernel: [  491.952339] md: raid6 personality registered for level 6
Jul  4 14:49:52 rockpi4 kernel: [  491.952833] md: raid5 personality registered for level 5
Jul  4 14:49:52 rockpi4 kernel: [  491.953316] md: raid4 personality registered for level 4
Jul  4 14:49:52 rockpi4 kernel: [  491.959926] md/raid:md0: device sdc operational as raid disk 2
Jul  4 14:49:52 rockpi4 kernel: [  491.960484] md/raid:md0: device sdb operational as raid disk 1
Jul  4 14:49:52 rockpi4 kernel: [  491.961025] md/raid:md0: device sda operational as raid disk 0
Jul  4 14:49:52 rockpi4 kernel: [  491.962943] md/raid:md0: allocated 4384kB
Jul  4 14:49:52 rockpi4 kernel: [  491.964488] md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
Jul  4 14:49:52 rockpi4 kernel: [  491.965161] RAID conf printout:
Jul  4 14:49:52 rockpi4 kernel: [  491.965169]  --- level:5 rd:4 wd:3
Jul  4 14:49:52 rockpi4 kernel: [  491.965177]  disk 0, o:1, dev:sda
Jul  4 14:49:52 rockpi4 kernel: [  491.965183]  disk 1, o:1, dev:sdb
Jul  4 14:49:52 rockpi4 kernel: [  491.965188]  disk 2, o:1, dev:sdc
Jul  4 14:49:52 rockpi4 kernel: [  491.965603] created bitmap (1 pages) for device md0
Jul  4 14:49:52 rockpi4 kernel: [  491.966746] md0: bitmap initialized from disk: read 1 pages, set 1788 of 1788 bits
Jul  4 14:49:52 rockpi4 kernel: [  491.968765] md0: detected capacity change from 0 to 359899594752
Jul  4 14:49:52 rockpi4 kernel: [  491.969465] md: md0 switched to read-write mode.
Jul  4 14:49:52 rockpi4 kernel: [  491.969930] RAID conf printout:
Jul  4 14:49:52 rockpi4 kernel: [  491.969951]  --- level:5 rd:4 wd:3
Jul  4 14:49:52 rockpi4 kernel: [  491.969968]  disk 0, o:1, dev:sda
Jul  4 14:49:52 rockpi4 kernel: [  491.969984]  disk 1, o:1, dev:sdb
Jul  4 14:49:52 rockpi4 kernel: [  491.969997]  disk 2, o:1, dev:sdc
Jul  4 14:49:52 rockpi4 kernel: [  491.970009]  disk 3, o:1, dev:sdd
Jul  4 14:49:52 rockpi4 kernel: [  491.980149] md: recovery of RAID array md0
Jul  4 14:49:52 rockpi4 kernel: [  491.980523] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Jul  4 14:49:52 rockpi4 kernel: [  491.981044] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Jul  4 14:49:52 rockpi4 kernel: [  491.981894] md: using 128k window, over a total of 117154816k.
Jul  4 14:51:41 rockpi4 kernel: [  601.050246] panel disable
Jul  4 15:00:30 rockpi4 anacron[1052]: Anacron 2.3 started on 2019-07-04
Jul  4 15:00:30 rockpi4 anacron[1052]: Normal exit (0 jobs run)
Jul  4 15:05:53 rockpi4 kernel: [ 1453.287257] md: md0: recovery done.
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567652] RAID conf printout:
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567661]  --- level:5 rd:4 wd:4
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567666]  disk 0, o:1, dev:sda
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567670]  disk 1, o:1, dev:sdb
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567674]  disk 2, o:1, dev:sdc
Jul  4 15:05:53 rockpi4 kernel: [ 1453.567677]  disk 3, o:1, dev:sdd
Jul  4 15:07:07 rockpi4 kernel: [ 1527.108599] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)

 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4     8159     8947    43789    42643    24543    10212
          102400      16    33078    40985    98244    98407    70763    41851
          102400     512    52870    53418   212184   202157   203772    50657
          102400    1024    66426    69555   250660   250200   249607    69539
          102400   16384   108537   112300   326090   324173   320777   106363

**********************************************************************************************************************************
RAID1

 

rock@rockpi4:~$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 117155264K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
rock@rockpi4:~$ cat /proc/mdstat
Personalities : [raid10] [raid6] [raid5] [raid4] [raid1]
md0 : active raid1 sdb[1] sda[0]
      117155264 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  2.3% (2801408/117155264) finish=8.8min speed=215492K/sec
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
Jul  4 15:20:25 rockpi4 kernel: [ 2324.757953] md: bind<sda>
Jul  4 15:20:25 rockpi4 kernel: [ 2324.759742] md: bind<sdb>
Jul  4 15:20:25 rockpi4 kernel: [ 2324.772561] md: raid1 personality registered for level 1
Jul  4 15:20:25 rockpi4 kernel: [ 2324.783910] md/raid1:md0: not clean -- starting background reconstruction
Jul  4 15:20:25 rockpi4 kernel: [ 2324.784534] md/raid1:md0: active with 2 out of 2 mirrors
Jul  4 15:20:25 rockpi4 kernel: [ 2324.785261] created bitmap (1 pages) for device md0
Jul  4 15:20:25 rockpi4 kernel: [ 2324.787956] md0: bitmap initialized from disk: read 1 pages, set 1788 of 1788 bits
Jul  4 15:20:25 rockpi4 kernel: [ 2324.790798] md0: detected capacity change from 0 to 119966990336
Jul  4 15:20:25 rockpi4 kernel: [ 2324.791556] md: md0 switched to read-write mode.
Jul  4 15:20:25 rockpi4 kernel: [ 2324.794162] md: resync of RAID array md0
Jul  4 15:20:25 rockpi4 kernel: [ 2324.794546] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Jul  4 15:20:25 rockpi4 kernel: [ 2324.795124] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
Jul  4 15:20:25 rockpi4 kernel: [ 2324.795964] md: using 128k window, over a total of 117155264k.
Jul  4 15:30:14 rockpi4 kernel: [ 2913.737079] md: md0: resync done.
Jul  4 15:30:14 rockpi4 kernel: [ 2913.745998] RAID1 conf printout:
Jul  4 15:30:14 rockpi4 kernel: [ 2913.746016]  --- wd:2 rd:2
Jul  4 15:30:14 rockpi4 kernel: [ 2913.746027]  disk 0, wo:0, o:1, dev:sda
Jul  4 15:30:14 rockpi4 kernel: [ 2913.746035]  disk 1, wo:0, o:1, dev:sdb
Jul  4 15:31:19 rockpi4 kernel: [ 2978.675630] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    24759    31559    39765    41196    25476    30710
          102400      16    62662    73245   124756   125744    62209    72778
          102400     512   139397   160038   260433   261606   218154   147652
          102400    1024   165815   155189   258119   261744   232643   164702
          102400   16384   172905   186702   318211   322998   321997   170680

*************************************************************************************************

RAID0

 

rock@rockpi4:~$ sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/sda /dev/sdb  /dev/sdc /dev/sdd
mdadm: chunk size defaults to 512K
mdadm: /dev/sdc appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
rock@rockpi4:~$ cat /proc/mdstat
Personalities : [raid10] [raid6] [raid5] [raid4] [raid1] [raid0]
md0 : active raid0 sdd[3] sdc[2] sdb[1] sda[0]
      468619264 blocks super 1.2 512k chunks

unused devices: <none>
Jul  4 15:38:35 rockpi4 kernel: [ 3415.084442] md: bind<sda>
Jul  4 15:38:35 rockpi4 kernel: [ 3415.085523] md: bind<sdb>
Jul  4 15:38:35 rockpi4 kernel: [ 3415.086511] md: bind<sdc>
Jul  4 15:38:35 rockpi4 kernel: [ 3415.087930] md: bind<sdd>
Jul  4 15:38:35 rockpi4 kernel: [ 3415.101830] md: raid0 personality registered for level 0
Jul  4 15:38:35 rockpi4 kernel: [ 3415.101836]  sdc: sdc1 sdc2
Jul  4 15:38:35 rockpi4 kernel: [ 3415.107953] md/raid0:md0: md_size is 937238528 sectors.
Jul  4 15:38:35 rockpi4 kernel: [ 3415.108427] md: RAID0 configuration for md0 - 1 zone
Jul  4 15:38:35 rockpi4 kernel: [ 3415.108866] md: zone0=[sda/sdb/sdc/sdd]
Jul  4 15:38:35 rockpi4 kernel: [ 3415.109261]       zone-offset=         0KB, device-offset=         0KB, size= 468619264KB
Jul  4 15:38:35 rockpi4 kernel: [ 3415.109973] 
Jul  4 15:38:35 rockpi4 kernel: [ 3415.110235] md0: detected capacity change from 0 to 479866126336
Jul  4 15:38:35 rockpi4 udisksd[572]: Error creating watch for file /sys/devices/virtual/block/md0/md/sync_action: No such file or directory (g-file-error-quark, 4)
Jul  4 15:38:35 rockpi4 udisksd[572]: Error creating watch for file /sys/devices/virtual/block/md0/md/degraded: No such file or directory (g-file-error-quark, 4)
Jul  4 15:41:08 rockpi4 kernel: [ 3568.278677] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)


 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    31874    42784    44859    48796    26191    42465
          102400      16    89104   112188   110570   114486    77652   111816
          102400     512   248787   259180   258800   270097   227197   229707
          102400    1024   309271   324243   293455   293122   268819   286143
          102400   16384   373574   382208   324869   326204   326070   380622

Concurrent single disks
 

        Command line used: iozone -l 4 -u 4 -r 16k -s 512M -F /home/rock/sda/tmp1 /home/rock/sdb/tmp2 /home/rock/sdc/tmp3 /home/rock/sdd/tmp4
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 4
        Max process = 4
        Throughput test with 4 processes
        Each process writes a 524288 kByte file in 16 kByte records

        Children see throughput for  4 initial writers  =  468982.85 kB/sec
        Parent sees throughput for  4 initial writers   =  391562.16 kB/sec
        Min throughput per process                      =  115979.48 kB/sec
        Max throughput per process                      =  118095.79 kB/sec
        Avg throughput per process                      =  117245.71 kB/sec
        Min xfer                                        =  513488.00 kB

        Children see throughput for  4 rewriters        =  448753.70 kB/sec
        Parent sees throughput for  4 rewriters         =  378103.46 kB/sec
        Min throughput per process                      =  108174.91 kB/sec
        Max throughput per process                      =  119841.15 kB/sec
        Avg throughput per process                      =  112188.42 kB/sec
        Min xfer                                        =  472992.00 kB

        Children see throughput for  4 readers          =  319857.60 kB/sec
        Parent sees throughput for  4 readers           =  319587.93 kB/sec
        Min throughput per process                      =   78386.40 kB/sec
        Max throughput per process                      =   81170.33 kB/sec
        Avg throughput per process                      =   79964.40 kB/sec
        Min xfer                                        =  506336.00 kB

        Children see throughput for 4 re-readers        =  331737.53 kB/sec
        Parent sees throughput for 4 re-readers         =  331539.26 kB/sec
        Min throughput per process                      =   74617.11 kB/sec
        Max throughput per process                      =   90278.13 kB/sec
        Avg throughput per process                      =   82934.38 kB/sec
        Min xfer                                        =  433360.00 kB

        Children see throughput for 4 reverse readers   =  769042.86 kB/sec
        Parent sees throughput for 4 reverse readers    =  768023.53 kB/sec
        Min throughput per process                      =   43320.77 kB/sec
        Max throughput per process                      =  262961.66 kB/sec
        Avg throughput per process                      =  192260.72 kB/sec
        Min xfer                                        =   86384.00 kB

        Children see throughput for 4 stride readers    = 1795856.09 kB/sec
        Parent sees throughput for 4 stride readers     = 1781767.61 kB/sec
        Min throughput per process                      =   65569.88 kB/sec
        Max throughput per process                      =  920383.50 kB/sec
        Avg throughput per process                      =  448964.02 kB/sec
        Min xfer                                        =   37360.00 kB

        Children see throughput for 4 random readers    = 1971409.70 kB/sec
        Parent sees throughput for 4 random readers     = 1958188.18 kB/sec
        Min throughput per process                      =   69869.92 kB/sec
        Max throughput per process                      =  861175.75 kB/sec
        Avg throughput per process                      =  492852.43 kB/sec
        Min xfer                                        =   41904.00 kB

        Children see throughput for 4 mixed workload    = 1176863.17 kB/sec
        Parent sees throughput for 4 mixed workload     =  275991.88 kB/sec
        Min throughput per process                      =   98414.23 kB/sec
        Max throughput per process                      =  606498.81 kB/sec
        Avg throughput per process                      =  294215.79 kB/sec
        Min xfer                                        =   84304.00 kB

        Children see throughput for 4 random writers    =  428459.84 kB/sec
        Parent sees throughput for 4 random writers     =  318774.34 kB/sec
        Min throughput per process                      =   96696.56 kB/sec
        Max throughput per process                      =  118440.29 kB/sec
        Avg throughput per process                      =  107114.96 kB/sec
        Min xfer                                        =  428352.00 kB

        Children see throughput for 4 pwrite writers    =  467800.79 kB/sec
        Parent sees throughput for 4 pwrite writers     =  381736.33 kB/sec
        Min throughput per process                      =  111798.68 kB/sec
        Max throughput per process                      =  120814.23 kB/sec
        Avg throughput per process                      =  116950.20 kB/sec
        Min xfer                                        =  485168.00 kB

        Children see throughput for 4 pread readers     =  309714.87 kB/sec
        Parent sees throughput for 4 pread readers      =  309501.91 kB/sec
        Min throughput per process                      =   76447.56 kB/sec
        Max throughput per process                      =   79120.13 kB/sec
        Avg throughput per process                      =   77428.72 kB/sec
        Min xfer                                        =  506592.00 kB

        Children see throughput for  4 fwriters         =  442763.85 kB/sec
        Parent sees throughput for  4 fwriters          =  373418.60 kB/sec
        Min throughput per process                      =  107828.45 kB/sec
        Max throughput per process                      =  114495.70 kB/sec
        Avg throughput per process                      =  110690.96 kB/sec
        Min xfer                                        =  524288.00 kB

        Children see throughput for  4 freaders         =  331765.48 kB/sec
        Parent sees throughput for  4 freaders          =  325459.39 kB/sec
        Min throughput per process                      =   81387.83 kB/sec
        Max throughput per process                      =   86099.32 kB/sec
        Avg throughput per process                      =   82941.37 kB/sec
        Min xfer                                        =  524288.00 kB
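Worth noting how to read the throughput runs: the "Children see throughput" line is the sum over the four iozone processes, so per-disk throughput is roughly a quarter of the aggregate. A quick check against the initial-writers figures above:

```python
# "Children see" aggregate vs per-process average for the 4 initial writers,
# figures copied from the iozone throughput output above.
children_total = 468982.85   # kB/sec, summed over the 4 writer processes
avg_per_process = 117245.71  # kB/sec, as reported by iozone

# The reported average is simply the aggregate divided by the process count,
# i.e. each SSD is sustaining roughly 117 MB/s of sequential writes here.
print(children_total / 4)
```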

single disk sda

 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    36038    45031    52457    52672    27342    44553
          102400      16    93224   115531   124822   114115    79868   115219
          102400     512   249415   223799   267595   273488   227651   258480
          102400    1024   259449   236700   268852   273148   242803   266988
          102400   16384   313281   317096   324922   325600   319687   267843

single disk sdb

 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    33918    45021    52628    52655    27404    44621
          102400      16   100152   106531   127148   115452    76579   113503
          102400     512   251035   259812   272338   273634   227332   225607
          102400    1024   260791   268019   273578   276074   241042   268323
          102400   16384   267448   316877   323467   324679   319983   316710

single disk sdc

 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    36074    44819    52358    52592    23334    44073
          102400      16    92510   114568   127346   126830    72293   112819
          102400     512   220032   260191   271136   274745   225818   258574
          102400    1024   258895   228236   270047   271946   239184   267370
          102400   16384   312151   316425   318919   323689   317570   268308

single disk sdd

 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    36100    44939    52756    52768    27569    42697
          102400      16   100207   111073   127120   118992    76555   105342
          102400     512   248869   259052   271718   272745   227450   258252
          102400    1024   226653   266979   262772   265104   236617   266018
          102400   16384   314211   269062   322937   325634   320150   315470


 


 


Forgot to set the Rock Pi 4 to PCIe 2.0, doh!

 

Also, if you are a plonker like me and forget to edit `/boot/hw_intfc.conf`, changing `#intfc:dtoverlay=pcie-gen2` to `intfc:dtoverlay=pcie-gen2`, you will be running at PCIe gen1 speeds.
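For context on why gen1 vs gen2 matters here, a rough theoretical sketch. Note the lane count is an assumption on my part (the Marvell 9235 is, as far as I know, a PCIe 2.0 x2 device):

```python
# Rough theoretical PCIe link bandwidth; gen1 and gen2 both use 8b/10b
# encoding (10 bits on the wire per data byte).
# Lane count for the Marvell 9235 card assumed to be x2.
def pcie_bw_mb_s(gt_per_s, lanes):
    return gt_per_s * 1e9 / 10 * lanes / 1e6  # MB/s

gen1 = pcie_bw_mb_s(2.5, 2)   # PCIe 1.x: 2.5 GT/s per lane
gen2 = pcie_bw_mb_s(5.0, 2)   # PCIe 2.0: 5.0 GT/s per lane
print(gen1, gen2)             # 500 vs 1000 MB/s, before protocol overhead
```

Which fits what the numbers below show: sequential reads that plateaued around 330 MB/s at gen1 jump to ~600 MB/s once the link runs at gen2.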

RAID 10
 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    11719    15447    55220    53720    25421    12773
          102400      16    39410    54840   139482   145128    81258    43792
          102400     512   228002   220126   334104   339660   265930   225507
          102400    1024   244376   243730   451377   462467   397566   258481
          102400   16384   270088   304411   597462   610057   615669   297855



RAID 5
 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4     6133     6251    47505    46013    25046     8190
          102400      16    17103    17134   113272   133606    79753    20420
          102400     512    61418    50852   241860   246467   244030    58031
          102400    1024    79325    73325   363343   359830   361882    83655
          102400   16384   127548   124702   625256   642094   650407   136680



RAID 1
 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    23713    29698    45608    45983    23657    30381
          102400      16    79205    82546   138060   144557    82126    93921
          102400     512   212859   221943   307613   304036   259783   179355
          102400    1024   235985   243783   366101   369935   317354   198861
          102400   16384   289036   290279   410520   398875   399868   295329



RAID 0
 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    33519    47927    52701    51023    26700    46382
          102400      16   105763   132604   138080   155514    87026   135111
          102400     512   276220   320320   311343   294629   267624   335363
          102400    1024   493565   522038   463105   470833   398584   522560
          102400   16384   687516   701200   625733   623531   555318   681535

4 individual disks conurrent


 

        Command line used: iozone -l 4 -u 4 -r 16k -s 512M -F /srv/dev-disk-by-label-sda/tmp1 /srv/dev-disk-by-label-sdb/tmp2 /srv/dev-disk-by-label-sdc/tmp3 /srv/dev-disk-by-label-sdd/tmp4
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 4
        Max process = 4
        Throughput test with 4 processes
        Each process writes a 524288 kByte file in 16 kByte records

        Children see throughput for  4 initial writers  =  884590.91 kB/sec
        Parent sees throughput for  4 initial writers   =  701620.17 kB/sec
        Min throughput per process                      =  195561.27 kB/sec
        Max throughput per process                      =  234457.59 kB/sec
        Avg throughput per process                      =  221147.73 kB/sec
        Min xfer                                        =  437344.00 kB

        Children see throughput for  4 rewriters        =  822771.77 kB/sec
        Parent sees throughput for  4 rewriters         =  701488.29 kB/sec
        Min throughput per process                      =  180381.25 kB/sec
        Max throughput per process                      =  232223.50 kB/sec
        Avg throughput per process                      =  205692.94 kB/sec
        Min xfer                                        =  408720.00 kB

        Children see throughput for  4 readers          =  755252.30 kB/sec
        Parent sees throughput for  4 readers           =  753357.02 kB/sec
        Min throughput per process                      =  169105.11 kB/sec
        Max throughput per process                      =  198976.81 kB/sec
        Avg throughput per process                      =  188813.07 kB/sec
        Min xfer                                        =  445664.00 kB

        Children see throughput for 4 re-readers        =  753492.39 kB/sec
        Parent sees throughput for 4 re-readers         =  750353.64 kB/sec
        Min throughput per process                      =  160626.64 kB/sec
        Max throughput per process                      =  201223.11 kB/sec
        Avg throughput per process                      =  188373.10 kB/sec
        Min xfer                                        =  418528.00 kB

        Children see throughput for 4 reverse readers   =  780261.86 kB/sec
        Parent sees throughput for 4 reverse readers    =  778761.55 kB/sec
        Min throughput per process                      =   58371.02 kB/sec
        Max throughput per process                      =  254657.08 kB/sec
        Avg throughput per process                      =  195065.47 kB/sec
        Min xfer                                        =  120192.00 kB

        Children see throughput for 4 stride readers    =  317923.62 kB/sec
        Parent sees throughput for 4 stride readers     =  316905.36 kB/sec
        Min throughput per process                      =   63171.63 kB/sec
        Max throughput per process                      =   98114.27 kB/sec
        Avg throughput per process                      =   79480.91 kB/sec
        Min xfer                                        =  337600.00 kB

        Children see throughput for 4 random readers    =  798898.78 kB/sec
        Parent sees throughput for 4 random readers     =  794905.95 kB/sec
        Min throughput per process                      =   57059.89 kB/sec
        Max throughput per process                      =  391248.59 kB/sec
        Avg throughput per process                      =  199724.70 kB/sec
        Min xfer                                        =   76480.00 kB

        Children see throughput for 4 mixed workload    =  647158.06 kB/sec
        Parent sees throughput for 4 mixed workload     =  491223.65 kB/sec
        Min throughput per process                      =   28319.04 kB/sec
        Max throughput per process                      =  305288.75 kB/sec
        Avg throughput per process                      =  161789.51 kB/sec
        Min xfer                                        =   48720.00 kB

        Children see throughput for 4 random writers    =  734947.98 kB/sec
        Parent sees throughput for 4 random writers     =  544531.66 kB/sec
        Min throughput per process                      =  167241.00 kB/sec
        Max throughput per process                      =  207134.38 kB/sec
        Avg throughput per process                      =  183737.00 kB/sec
        Min xfer                                        =  424704.00 kB

        Children see throughput for 4 pwrite writers    =  879712.72 kB/sec
        Parent sees throughput for 4 pwrite writers     =  686621.58 kB/sec
        Min throughput per process                      =  186624.69 kB/sec
        Max throughput per process                      =  236047.30 kB/sec
        Avg throughput per process                      =  219928.18 kB/sec
        Min xfer                                        =  415856.00 kB

        Children see throughput for 4 pread readers     =  777243.34 kB/sec
        Parent sees throughput for 4 pread readers      =  773302.81 kB/sec
        Min throughput per process                      =  184983.08 kB/sec
        Max throughput per process                      =  203392.77 kB/sec
        Avg throughput per process                      =  194310.84 kB/sec
        Min xfer                                        =  476896.00 kB

        Children see throughput for  4 fwriters         =  820877.50 kB/sec
        Parent sees throughput for  4 fwriters          =  693823.17 kB/sec
        Min throughput per process                      =  194228.28 kB/sec
        Max throughput per process                      =  217311.28 kB/sec
        Avg throughput per process                      =  205219.38 kB/sec
        Min xfer                                        =  524288.00 kB

        Children see throughput for  4 freaders         = 1924029.62 kB/sec
        Parent sees throughput for  4 freaders          = 1071393.99 kB/sec
        Min throughput per process                      =  268087.50 kB/sec
        Max throughput per process                      =  970331.94 kB/sec
        Avg throughput per process                      =  481007.41 kB/sec
        Min xfer                                        =  524288.00 kB

Single disk sda reference

 

        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    35191    45728    56689    53307    27889    48508
          102400      16   104379   122405   154385   157484    88670   113964
          102400     512   315788   347042   351932   348604   271399   288430
          102400    1024   358399   366194   388893   379453   338470   369888
          102400   16384   353154   443256   425396   422384   410580   444530


 


When I build a RAID0 (for performance), I partition the drives so that they have a "lot" of "small" partitions at the low sector counts; those partitions will be the ones used most often.

My boot partitions do not count (they're not in RAID anyway), so I might have 3 small partitions for boot and boot-backup, then a swap partition on each drive. Because Linux stripes across equal-priority swap partitions (effectively a software RAID0), I generally do not combine the partitions as RAID; however, if you're using a hardware RAID0, then it will likely pay to do so.
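The swap striping mentioned above relies on giving each swap device the same priority; a hypothetical `/etc/fstab` fragment (partition names are examples, not from the post):

```
# Equal pri= values make the kernel stripe swap pages across the devices,
# which behaves like a RAID0 without mdadm being involved.
/dev/sda2  none  swap  sw,pri=10  0  0
/dev/sdb2  none  swap  sw,pri=10  0  0
```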

After the swap partition, I have my /var, where all the log files go. It's a small partition as well.

Next partition is my build partition; this is where all the binary files go when I compile my sources. I usually make this 16GB for a 4 or 5 drive RAID0. A source-partition follows, which is twice the size of the build partition.

Depending on what the purpose is, I usually put a git partition, a web partition or a mail partition next.

When I've "spent" 80% of my drive-space, I make an "archive" partition, which is used as a cache for holding all my downloaded .bz2/.gz/.xz/.lz4/.7z files (I have a separate partition for transmission, which is a lower priority partition in the first 80% as it's large).

 

The first part of the hard disk will read the fastest, since the lowest sector numbers are on the outermost tracks.

You can try this out easily by partitioning your drives into, say, 7 partitions each (5%, 60%, 5%, 5%, 5%, 15%, 5%), then writing a large zero-file to each of the 5% partitions and comparing the timings. I say zero-file because using /dev/random would skew your readings: the random generator itself becomes the bottleneck (on the Mac it blocks, and is very, very slow).
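The zero-file test above is easy to script; a minimal sketch, with a temp directory standing in for one of the 5% partition mount points (replace `target` with the real mount point to measure an actual partition):

```shell
# Write a 64 MiB zero-file and time it; bigger counts give steadier numbers.
# conv=fsync makes dd flush before reporting, so the throughput figure is honest.
target=$(mktemp -d)                     # stand-in for e.g. /mnt/part1
time dd if=/dev/zero of="$target/zero.bin" bs=1M count=64 conv=fsync
rm -rf "$target"
```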

 

Also, if your RAID controller is connected via PCIe (which is the case with the 9235), then the PCIe bandwidth will limit the performance of the 4 ports. You won't be able to get 24 Gbps 'raw'. At most, you'd be able to get 476.8 MB/sec 'raw' (1024-based) on a 5 GT/s PCIe 2.0 lane. But then PCIe protocols may take 5% of the cake, and since SATA is 8b/10b encoded, you're already down to 461 MB/sec. SATA also has some protocol overhead, which needs to be subtracted (as far as I recall, it's around 15%, but I'm not 100% sure).

On a dedicated 6G SATA port, you shouldn't be able to get much above 491 MB/sec (572.2 MB/sec raw) [515,000,000 Bps; 600,000,000 Bps raw]. In my post, 'raw' means including all the transport-layer, protocol-layer and other command/control data that wraps your data in safe packets.
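The arithmetic above is easy to reproduce; a quick sketch of the estimates (the ~15% SATA protocol overhead is the recollected figure from the post, not a measured one):

```shell
# PCIe 2.0 is 5 GT/s per lane with 8b/10b coding -> 500,000,000 payload B/s/lane
awk 'BEGIN { printf "PCIe 2.0 x1: %.1f MB/s\n", 5e9/10/1024/1024 }'    # 476.8
# The 9235 sits on two lanes, so double it for the card-level ceiling
awk 'BEGIN { printf "PCIe 2.0 x2: %.1f MB/s\n", 2*5e9/10/1024/1024 }'  # 953.7
# SATA 6G: 600,000,000 B/s after 8b/10b, minus an assumed ~15% protocol overhead
awk 'BEGIN { printf "SATA 6G est: %.1f MB/s\n", 6e9/10*0.85/1024/1024 }'
```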

 

Some of my information might not be 100% accurate, but you can safely use it to avoid getting your hopes up above what's possible. ;)


Note: Choice of filesystem is of course also important. A hashing CoW filesystem can save you a lot of transfer time, so it might be helpful to find an FS that supports this (I use BTRFS, but I've gathered that ZFS is a good choice too).


Cheers Jens :).

 

The RK3399 is PCIe 2.1 x4; the Marvell 9235 is x2, so 2 lanes, and I can't find an x4 card in M.2. I presume the silicon exists, but it doesn't seem to be available as a product.

This was just suck-it-and-see, as I was expecting to max out the RK3399 & Marvell 9235, but they could probably cope with a faster SSD array.
I am not going to use SSDs, but this gives me a datum on the bandwidth available; if I went RAID, I'd probably use RAID 10.
But I guess it depends what you're going to do; many may just want a media store, where redundancy might not even be key.

The RK3399 with straight Samba and no optimisation is providing > 100 MB/s over Ethernet; approx 110 MB/s seems the normal steady max, which is probably near the best 1GbE will do.

I haven't got a switch that can do LACP, which is sort of pointless anyway as there will generally be a 1GbE bottleneck somewhere, but out of curiosity I was wondering about using the USB 3.0 ports for Ethernet aggregation.

There are some tricks with CPU affinity, and I could try clocking the RK3399 nearer to 2.0GHz/1.6GHz big/little, again out of curiosity; as it is, it's more than enough for 1GbE.

ZFS seems to be getting quite popular, so I might give that a go too. Loving OpenMediaVault; such an excellent NAS, and really a full-blown micro server with the plugins it has.


3 hours ago, Stuart Naylor said:

The RK3399 is PCIe 2.1 x4; the Marvell 9235 is x2, so 2 lanes, and I can't find an x4 card in M.2. I presume the silicon exists, but it doesn't seem to be available as a product.

I think we're on the same page. :)

 

I'm interested in saturating the I/O interfaces as much as I can. I use hard drives, not SSDs (I've already heard some screams about SSD data destroyed because the drives got worn out; my WD hard drives have lasted for more than 12 years without failure now, so I'll stick with those).

 

I ended up looking for Mini-PCIe-to-fullsize-PCIe riser cards and found one that might be worth purchasing. I haven't ordered anything yet, but such a card could allow me to add one of the cheap x1 PCIe 2.0 cards with four 6G SATA ports. I think the more drives you have, the longer they'll last, because the load will be spread across them, and what you (I) really want is to keep that PCIe interface flooded with data. ;)

 

On my Mac, I have a PCIe 2.0 x4 slot and want to find an x4 6G SATA RAID card, but that's also difficult even for full-size PCIe. Marvell's 923x chips have only one or two lanes, so they can't spread the I/O out over four lanes. That means it's likely I want to look for a card that has two of those Marvell chips on it.

I found something that came close, but I can't seem to find that bookmark right now (!)

[I might just go for an ATTO or Areca, perhaps a SAS].

 

As for Ethernet, on my EspressoBIN the max throughput is 1Gbps, even though there are 3 ports on a 2.5GbE Topaz switch. Quite annoying.

-So I figured that since there's a USB3 connector which I use for nothing, I could likely go and add a few Ugreen adapters:

One USB3-to-3xUSB3+1xGbE adapter

Two USB3-to-1xGbE adapters

That would give me 3 extra GbE ports and still room for attaching a keyboard or mouse on the unused USB3.

Now some math...

USB3 gives us max. 375MB/sec (clean) -but only if the hardware is designed right and the software as well.

GbE is 1000000000 raw bits per second = 125000000 raw Bytes per second (no 8b/10b encoding! -This is because we clock 8 bits out on a 125MHz crystal).

125,000,000 / 1024 / 1024 = 119.2 MB/sec. This is the raw throughput, which we add to the 375MB/sec from the USB, so we'll get around 494MB/sec. That's a fairly good balance between one 6G SATA port (491MB/sec) and four GbE ports (494MB/sec).
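The same arithmetic in runnable form (the 375MB/sec USB3 figure is the assumed 'clean' number from the post):

```shell
# GbE is 1e9 raw bits/s with no 8b/10b coding, i.e. 125,000,000 B/s per port
awk 'BEGIN { gbe = 1e9/8/1024/1024
             printf "per GbE port: %.1f MB/s\n", gbe         # 119.2
             printf "USB3 + 1 GbE: %.1f MB/s\n", 375 + gbe   # 494.2
           }'
```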

I won't even consider using the fragile USB3 port for attaching a hard disk, which is why I chose it for GbE. No harm done if the network connection is lost, but if you're writing to the hard disk when the connection is lost, you've likely damaged the data on one or more drives.

 

Yes, I do believe link aggregation is something that can be done. I have one of the SG108E switches from TP-Link; it has 8 ports and supports LAG. It's fine as a toy, but if you want LAG on more than 3 ports, you'll need a switch with 16 ports or more. I'm considering the T1700G-28TQ for this, especially because it'll fit nicely with the MacchiatoBIN and has plenty of GbE ports for SBCs.

Edited by Jens Bauer
Changed Mini-PCIe-to-PCIe riser card link to correct product.

Yeah, 2.5GbE would be quite attractive, as USB 3.0 can use existing copper, but finding switches or routers is another thing.
Port aggregation is probably the way to go with 3x 1GbE. 2.5GbE USB is cheap, but nothing else is! :)
https://cpc.farnell.com/pro-signal/psg91497/usb3-0-to-2-5g-ethernet-adapter/dp/CS32425

I sort of lost interest in ARM SoCs, but the arrival of Panfrost and practically 100% mainline support rekindled my curiosity at least.

The thing I have been trying to find, and I'm slightly unsure why they don't exist as the silicon does and is relatively cheap, is PCIe packet switches. The RK3399 has a full-blown root complex that feeds a single endpoint; with a packet switch it could feed 1x4, 2x2 or 4x1 lanes.
They just don't seem to exist, apart from at silly money from specialist telecom suppliers.

On the SATA cards, if they have BIOS support they likely will not work with U-Boot and ARM.
So it gets quite tricky to find higher-bandwidth, dumb-and-simple SATA cards.
 


I follow you on losing interest in the ARM SoCs, but I do know the ICs are capable of what I want - I just wonder why no one can figure out how to pull all the benefits out of a CPU. :/

I just looked at eBay and found the 2.5GbE adapter there as well; slightly higher price, but accessible for those who don't have a company. Even though the price is higher, it's still not too bad and the product might be better than using two of Ugreen's.

 

I've been using port multipliers (3G SATA JMB321 variants without heatsinks, but it seems they might overheat and die). Recently I found the 6G SATA variants with heatsinks for the same price; 3 pounds lower and I'd order one each week, as it'd become duty-free for me then. (If you look for cheaper PMs, make sure you find some with heatsinks on them, as the Chinese sellers like to state that they're 6G even though they use the JMB321 - "yeah, you can attach a 6G drive to it, but that doesn't make it 6G").

 

BTW: I use 1TB WD-RED 2.5" for my RAID0; there are several reasons:

1: A 4TB 3.5" drive has several platters, meaning that if data are scattered, the arm will move more.

2: They use very little power: 0.2W stand-by and 1.4W read/write.

3: They produce very little heat, so I can easily keep my rack fanless.

4: They're silent.

5: They transfer 144000000 Bps sustained, which means 5 drives can easily saturate a 5 port PM.

6: They need only a 5V supply, which means I can make my own solar powered PSU easily.

 

It seems you're located in the UK. If so, I'd like to recommend Kenable when you purchase CAT6A, USB, HDMI and other cables, because I measured Kenable's cables and their resistance is very, very low compared to anything you get from China. The reason is that they use copper as pure as they can. Purchasing from them even saves me money compared to purchasing from a Danish web store (including shipping from the UK!).


I am not really a fan of the port multipliers; they are slightly Mickey Mouse and probably fan out to too many devices.
But they are really cheap; I have still to find a glowing review, though.
If you are going JBOD or some other form of linear aggregation with slower drives, then as SATA1 is 150MB/s, maybe don't use the 5th port as it doesn't compute.
That could be a whopping 16 drives from a 4-port Marvell 9235!

The Marvell 9235 actually came with 4x SATA cables, but to keep things low-profile I wanted to use the right-angle on-board connectors.
Strangely, the ones supplied are the DVD type, so they point in on each other if you try; strangely enough, Kenable did come to the rescue with 4x black 6Gbps good-quality cables, probably still the cheapest.

I am really digging the RockPi4 because of its form factor, but also the implementation of USB-C PD; sod those little wall chargers. I have a 60-watt 12V CCTV-type supply with a 5.5mm barrel connector and a USB-C adapter.
There is an absolute plethora of 5.5mm barrel connectors and splitters; they are a much better power connector, and it also gives me a 12V rail.
I have one of those 5-to-4-pin Molex daisy-chain SATA power cables you often see, with the 4-pin Molex snipped off; it uses a 5.5mm barrel with terminals for the two 12V wires, and there is a tiny fixed-voltage 3.0A buck converter for the 5V.
I really like that arrangement, as it takes no GPIO and the power draw is isolated from the board on separate rails.

There are a lot of RK3399 alternatives that probably all benefit from that Google/Rockchip partnership, which as a package I have started to quite appreciate.
I didn't at first: I had the Pine RockPro64 when it was an early arrival, after a long trial of Pi alternatives, had a tantrum about custom images and kernels where nothing works, and got rid of it.
Being English, I had been wishing for a long while that Raspberry would up the ante and set a new precedent for the application SoC, but with the arrival of the Pi4 I have given up on that ever happening.

Because 3.5" 5-bay drive cages are so cheap, I can fit 4x 3.5" drives, and 3.5" 1TB drives are much easier to source second-hand at approx £10-15 + p&p.

 


9 hours ago, Stuart Naylor said:

I am not really a fan of the port multipliers; they are slightly Mickey Mouse and probably fan out to too many devices

Agree completely, certainly Mickey Mouse, but then again I use them solely because I want to saturate the native 6G SATA with sustained transfers. I know the JMB575 can only reach 500 "MB" per second (I don't know if those 500 "MB" are 1000-based or 1024-based), but it comes close enough for now, and I have a fairly fast NAS+server which didn't cost a huge amount of money.

 

... reading the rest of your post, I just went "agree", "agree", "agree", ... :)

-Definitely interesting, those cages. I'll likely purchase some of these, because they're only around £45 each - then stack 10 and mount them on their side as one 4U block. A quick search found them for £47 on Amazon and £43 on Alternate; they're also available on eBay.

Until then, I have a stainless steel food tray (2nd hand store: £1), which I plan on cutting rectangular using an angle-grinder. Drill two rows of holes spaced 20mm and mount up to 10 drives in it.

 

-I'm looking forward to the JMS591, which is not yet released. It's a 6G SATA-to-SATA RAID controller (5 devices) which does RAID0, RAID1, RAID5 and RAID10. I'm just shooting for RAID0 for now ("The riskier the road, the greater the profit.", RoA 62 - but one also has to bear RoA 48 in mind: "The bigger the smile, the sharper the knife.").

 

In the beginning I used a lot of these 3-way splitters, but as I ran out, I decided to just cut the Molex-plugs off of these cables and add my own connectors on only the 5V and GND, leaving the 12V unconnected.

-And yes, I also agree that the drives should not be powered by the board itself. Sadly, many people think this can't be done or is risky or dangerous, but in fact it's much worse not to do so.

 

I was so close to purchasing the RockPro64, but then the Raspberry Pi 4 came and I picked up a few of those instead (because I want the four Cortex-A72 cores). I'm still thinking about the RockPro64 for a NAS; it's a pretty good board, it's just sad that I have to pay customs duties, which makes two of those cost the same as a MacchiatoBIN double-shot. The MacchiatoBIN has much higher priority for me, but it will still take time before I can afford it.


I use these, Jens; they're perfect for a RAID housing: https://www.ebay.co.uk/itm/1Pc-4pin-ide-to-5-port-15pin-sata-power-cable-18AWG-wire-for-hard-drive-LDOQ/264339041268

PS: the RockPro64 with 2xA72 + 4xA53 is still about 20-25% faster than the Pi4.

Out of curiosity I just purchased 2x JMicron JMB575 SATA port multipliers.
It suddenly struck me that I've never seen a review where they are used to split a single SATA3 into 2x SATA2, but I will also try SATA3 to 4x SATA1; the 5th port always confuses me.


Those power cables are much better than the 4-pin Molex!

BTW, you probably got me hooked on those cages. I found a slightly lower price here.

The sad thing about purchasing a RockPro64 is that I can get three Raspberry Pi 4Bs for the price I would pay for the RockPro64. -But at some point I will get a RockPro64 - I just wish it had a couple of native 6G SATA ports, but maybe I'll be able to split the PCIe into four x1 ports - do you know if that will actually work?

Uhm, wait a sec - that USB-C: does it work as a full USB3 in addition to the second USB3 port?

-If so, then there are two USB3 ports, which could likely give me a total of 6 GbE ports in addition to the built-in GbE (resulting in up to 834MB/sec throughput - let's say 800MB/sec). The PCIe2 x4 could then hold a SATA or SAS controller, giving a max raw throughput of 1.9GB/sec.

This beats the EspressoBIN by several lengths (besides, RockPro64 is a much better design).


Nope; without a packet switch, a single endpoint steals all the lanes on that root complex, even if it doesn't use them.

Yeah, those 5-bay cages can be had for about £15 and are pretty much all you need.

Dunno about your prices, though; I just got my Pi4 from Pimoroni in the UK, £49.65 delivered for the 2GB.
The RockPro64 2GB is $59.99, but you will have to add delivery and tax; still, I doubt you will get 2 Pis for that?
Dunno about the Type-C; https://wiki.pine64.org/index.php/ROCKPro64_Main_Page#Expansion_Ports - you will probably have to research it.
I didn't really spend much time with mine, as I say; I sort of lost interest at that time.
I think all the RK3399 boards have 2x USB 3.0 and 2x USB 2.0, with PCIe 2.1 x4 on most as well.


27 minutes ago, Stuart Naylor said:

Nope; without a packet switch, a single endpoint steals all the lanes on that root complex, even if it doesn't use them.

Bummer. I've seen that ASMedia makes PCIe switches, but I don't know when someone will make a really affordable splitter (Linus Tech Tips did show one once, but it was expensive).

 

The Raspberry Pi 4 B (1GB) costs £35 each, including shipping from the UK.

The RockPro64 (the 4GB is what I want, so the comparison isn't really fair) costs $80 + $12 shipping = $92 (around £72).

Now multiply by 1.25 and [update: add DKK 160 (£19)], and you'll get around £109.

-In my case, I had a further discount on the RPi, so I paid only £33.25 each.

 

In my case, I need both some storage-capable devices and some compute-power (build-farms).

The Raspi 3B+ and 4 can netboot without any firmware modification, which makes them quite neat for build farms. I can make a power controller so they can be turned on when needed, too.

 

I just realized that I got way off-topic for this thread (sorry to everyone reading it; I hope that you'll benefit from our conversation anyway).


Lols, actually Jens, I think it's good, as these are all things to think about; some might be eyeing those benches for a NAS that needs to be more secure than the Pi4.
The Pi is the de facto datum of the SBC world, and it's good to compare.

 

I am extremely disappointed with the Pi4; it's not the gold standard the media makes it out to be.
What really disappoints me is that it's so close: with a bit more holistic design it could have been truly awesome and killed off the RK3399.
The RK3399 and Rockchip's application line, with the oncoming RK3588, is starting to look extremely strong.
It seems to me Raspberry have totally failed on their economies of scale.
If Raspberry had designed more, and not used a set-top-box failure and fudged a USB 3.0 host onto a PCIe 2.0 bus, we might have seen M.2 on a Pi.
If that had happened, we would probably have seen a whole plethora of PCIe peripherals.
It's so weird of Raspberry to launch into the same market space as their existing products and constrain what they offer the community to relatively small incremental jumps.
They now have a ton of products, all very similar, at similar prices, all doing similar things, and a lot of them; it's sort of nuts.
But hey, that's just my take on Raspberry, and I have swallowed too much of their snake oil now to continue to be a fan.

The PCIe packet switches are really frustrating: I did some research, and the silicon is cheap, but once it becomes a product it is rare and extremely expensive.
I am not really sure why they're not more available and end prices are so high, when the chip cost is under $10.
Broadcom is the biggest supplier, but Diodes make quite well-priced PCIe packet-switch silicon.

 


5 minutes ago, Stuart Naylor said:

rk3588

I've had my eyes on that one too. Unfortunately, it'll probably take a couple of years before we have it in an SBC.

 

The Raspberry Pi developers, they target mainstream. That means "VHS is better than Betamax", "Windows is an operating system and it's better than Unix", etc, etc.

-They have the customers now and just like cellphones, people want to upgrade to the newest top-of-the-pop.

[Had a few typos and thought it a shame wasting them: pos, poop].

-On the bright side, they still have four Cortex-A72 cores running at 1.5 GHz, so they should be able to do some work - and the GbE is also (at last) full speed; thus, compared to the EspressoBIN, it'd be quicker to use the RPi for building, except that it does not have direct access to the hard disk array.

 

I'd like to see a SBC with the BCM5871X, even though it's only Cortex-A57 and not Cortex-A76.

It's very versatile, so perfect for a server rack - but it's likely also much more expensive than those CPUs we're used to.

 

4 minutes ago, Stuart Naylor said:

I am not really sure why [the PCIe packet switch] is not more available and end prices are so high the chip cost is under $10

If we wait a little, maybe some chinese developers will mass produce a cheap PCB that can do it.

Just like with the JMB321 and 575 port multipliers.

 

Just found another interesting item; I don't know if I'll purchase those, but they seem to be really nice heatsinks for 2.5" drives. I'll likely go for 3 of the cages you mentioned earlier, then stuff them with 15 Chieftec ATM-13225-RD in a RAID1 configuration, so I can make a RAID10 using BTRFS for the RAID0.


I dunno, Jens; maybe there is more to it, but to be honest it is beyond my knowledge of PCIe as to why not.
PCIe 2.0 has been replaced by PCIe 3.0, so 2.0 devices have turned up in relatively cheap automotive products.

It probably has more scope than the port multipliers, and it surprises me that an SBC supplier hasn't provided this as an option.
Maybe someone will post info, even if it is a 'why not'.

There is one here, PCIe x1 to 3x PCIe x1, for only £16:
https://www.ebay.co.uk/itm/3-in-1-PCI-Express-PCI-E-1X-slots-Riser-Card-Expansion-Adapter-PCI-E-Port/254122157852

Which seems to be a https://www.diodes.com/assets/Datasheets/PI7C9X2G404SL.pdf

I am presuming those miner cards over usb are just a single lane?


On 7/6/2019 at 4:44 PM, Stuart Naylor said:

I am presuming those miner cards over usb are just a single lane?

Most of those I've seen (especially on eBay) are just single-lane - and they have nothing to do with USB, other than using a USB cable, so no one reading this should even attempt to connect a USB device to such a card, or you'll probably fry something. I wouldn't be surprised if the Chinese also "invented" a "240VAC-to-USB converter adapter"...

 

I have not tried any of the miner cards yet; if I do, I'll likely get one of those that fans out to 3 or 4 full-size PCIe slots, but converting from Mini-PCIe to multiple full-size PCIe slots might require more Mickey Mousing.

-I've actually been considering it for a while. =)

My idea would be to take the earlier mentioned Mini-PCIe-to-fullsize-PCIe adapter card and then connect this single lane to a card like the one you mentioned.

 

-Anyway, I think that some PCIe-capable CPUs may allow you to pull out the lanes individually (this is only a guess). This guess is based on the fact that such a CPU allows you to change the configuration between 1x4, 2x2 or 4x1.

-If that's the case, I think you might not need the PCIe switch, but if the configuration is fixed as a x4, then you'd need the switch. As far as I understand, the RK3399 "Pro" has a fixed configuration, so a switch would be necessary.

On the BCM5871X, it's possible to configure the PCIe to "whatever you wish", which is why I think a switch would not be necessary there.
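If anyone wants to check what a given board actually negotiated, a read-only look at the link registers should tell (assuming pciutils is installed; the output is of course hardware-dependent):

```shell
# LnkCap = maximum link width the device supports (e.g. "Width x4"),
# LnkSta = link width actually negotiated with the host (e.g. "Width x1").
sudo lspci -vv | grep -E 'LnkCap:|LnkSta:'
```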

 

On eBay, so far I've only found PCIe x1-to-two-x1 or PCIe x1-to-four-x1. Those cards are both under the duty-free limit for Denmark.

There's another x1-to-four-x1, but still kinda useless for us.

I'm pretty sure these use PCIe switches to make this stuff work, I just wish they'd make a PCIe x4 variant.

-Most miner boards only have a bunch of x1 slots, which is why all those only support x1.

I have no idea if these x1 boards will be useful on a SBC. Maybe if you're only using one peripheral at a time.

 

I'll try and ask ComputerSalg if they know of a x4-to-four-x1 card - they'd likely know if such a card exists.


They do those miner cards as full-size PCIe to full-size PCIe, but just a single lane over that USB cable - which is prob as good as any for the purpose of a single lane.

But yeah, the fan-out is via a PCIe packet switch - but x1 to 3x x1 :)

PCIe x4 to 2x x2 is my grail quest, but you can get packet switches with 4 endpoints that can take any combination of x2 and x1 - and once lanes are used, they are used.
So x2 & 2x x1, or 2x x2, or 4x x1 are all valid, and that would give a hell of a lot of modular flexibility.


28 minutes ago, Stuart Naylor said:

you can get packet switches with 4 endpoints that can take any combination of x2, x1

That's very nice.

Marvell's 923x is definitely what I'd go for myself, since it has built-in RAID support.

-But I might consider moving to JMS591, because it supports FIS-based port multipliers, so one could go insane if needed...

For me, the balance would probably be one x1 for network (4Gbps + 1 SBC-built-in GbE) and 3 for SATA. Additional GbE could still be attached via USB3 to get an additional max. 375MB/sec. I've also been thinking about using a 10GbE card, but two lanes are not much for 6G SATA.
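To put numbers on the "two lanes are not much for 6G SATA" point, a quick back-of-the-envelope calculation (theoretical line rates only, ignoring protocol overhead beyond the 8b/10b encoding - the numbers are my own, not measured):

```python
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding -> 500 MB/s usable.
# SATA 6G runs at 6 Gbit/s, also 8b/10b encoded         -> 600 MB/s usable.
PCIE2_LANE_MBPS = 5e9 * 8 / 10 / 8 / 1e6   # 500.0 MB/s per PCIe 2.0 lane
SATA6_PORT_MBPS = 6e9 * 8 / 10 / 8 / 1e6   # 600.0 MB/s per SATA 6G port

def pcie_vs_sata(lanes: int, ports: int) -> tuple:
    """Theoretical host-link vs. aggregate-disk bandwidth in MB/s."""
    return lanes * PCIE2_LANE_MBPS, ports * SATA6_PORT_MBPS

host, disks = pcie_vs_sata(lanes=2, ports=4)
print(host, disks)   # 1000.0 2400.0 - the x2 host link is the bottleneck
```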

Half a year ago, I discarded the idea about purchasing a miner-motherboard for NAS purposes. The total power consumption is way too high for me (that's the real reason for me to use SBCs). A miner-motherboard setup could work just fine for those who do not care about the electricity bill, but then again why not just buy a SAN. ;)


Just looked outside eBay; there seem to be some, but I have not found any prices yet; I expect the prices to be high, though.

 

Update: It seems that DeLock already branded the IOI products, still no prices.

 

I think this one might be a x4-to-two-x2 variant that you could use.


"This product is not available anymore." which is a shame.

I was just coming to say I can find 1x x1 to 4x x1
https://www.ebay.co.uk/itm/New-4-Ports-PCIe-Riser-Adapter-Board-PCI-E-1x-to-4-USB-3-0-PCI-E-Rabbet-GPU-A1J6/283442792757

Look at all those usb leads hey!

£12.99

That is more my price range :)


The DeLock products seem to be EOL, but on IOI's web-site it looks like they're still being manufactured.

(The "Show/Hide More Phased out Products" button at the bottom of their page is a little misleading; it just lets you view the EOL products too.)

