
Stuart Naylor

Reputation Activity

  1. Like
    Stuart Naylor got a reaction from lanefu in New ROCK 3 Model A co-developed by RADXA and Rockchip   
    I still sort of think Radxa are positioning this board incorrectly: for me it's the router companion to the RK3566, which is a great edge board.
    They seem to be positioning it as a series of multipurpose SBCs whilst ignoring its extremely strong selling points.
    It could be an extremely low-cost board with 3x native SATA, where the PCIe 3 slot could populate another 1GbE M.2 if you wished.
    But even as it is, 3x SATA could be perfect for a mini NAS on a single 1GbE at extremely low cost.

    The other amazing thing is that the RK3568 has a 5GbE MAC, which would need a 5GbE PHY chip to be added, and then an M.2 SATA card on the PCIe 3 x2 slot.
    At that cost it would be amazing, and it passes a hurdle that totally bottlenecks many current SBC NAS builds: ethernet bandwidth.

    The other thing that looks really good on paper is the G52, but in reality it's a cut-down MP1 that is quite different from the standard part; the Mesa crew gave it the name G52L ('little').

    I have been looking at the Radxa Zero... really cool; that they might also do an S922X board could be cool too, and the RockPi 5 with the RK3588 is a lot of cool.
    But when I look at the format, cost and offerings of the Rock 3 range I am sort of confused, as what I see as huge potential value has been overlooked, and the layouts provide just another series of general-purpose SBCs where they don't really excel or have a unique selling point at anything...?

    Each to their own, but I am probably not likely to be interested in one.
     
  2. Like
    Stuart Naylor got a reaction from lanefu, Werner and Igor in Really impressed with the latest Armbian   
    Just to say: really impressed, and wow, you guys have done a lot with the documentation since I last looked.

    I was never a big fan and it's been a while, but I have to say it's really great; thanks for all your efforts.

    No script bloat, it all seems really logical, and much work must have been done.
     
  3. Like
    Stuart Naylor got a reaction from Igor in 2.5gbe USB rtl   
    Net test, rockpi4 as client: ~108MB/s, sort of 1GbE speed?
     
    root@rockpi:~# dd if=/dev/zero bs=1024k count=1024 | nc -v 192.168.1.9 2222
    Connection to 192.168.1.9 2222 port [tcp/*] succeeded!
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 9.98256 s, 108 MB/s

    x86 client, rockpi server:

    stuart@stuart-pc:~$ dd if=/dev/zero bs=1024K count=1024 | nc -v 192.168.1.12 2222
    Connection to 192.168.1.12 2222 port [tcp/*] succeeded!
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 23.158 s, 46.4 MB/s

    Somehow it auto-negotiated to 1GbE, methinks; I will have to check nmcli.
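    To pin down what the adapter actually negotiated, ethtool is more direct than nmcli; a minimal sketch, where the interface name is an assumption (check `ip link` for yours):

    sudo ethtool eth0 | grep -i speed
    # e.g. "Speed: 1000Mb/s" would confirm it linked at 1GbE rather than 2.5GbE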
  4. Like
    Stuart Naylor got a reaction from Jens Bauer in Software RAID testing Rockpi4 Marvell 4 port sata   
    They do those miner riser cards, full-size PCIe to full-size PCIe but just a single lane over that USB cable, which is probably as good as any for the purpose of a single lane.

    But yeah, the fan-out is via a PCIe packet switch, x1 to 3x x1.

    PCIe x4 to 2x x2 is my grail quest, but you can get packet switches with 4 endpoints that can take any combination of x2 and x1, and once the lanes are used they are used.
    So x2 & 2x x1, or 2x x2, or 4x x1 are all valid, and that would give a hell of a lot of modular flexibility.
  5. Like
    Stuart Naylor got a reaction from Jens Bauer in Software RAID testing Rockpi4 Marvell 4 port sata   
    Nope, without a packet switch a single endpoint steals all the lanes on that root complex, even if they are not used.

    Yeah, those 5-bay cages can be got for about £15 and are pretty much all you need.

    Dunno about your prices though; I just got my Pi4 from Pimoroni in the UK, £49.65 GBP delivered for the 2GB.
    The RockPro64 2GB is $59.99, but you will have to add delivery and tax; I doubt you will get 2 Pis for that?
    Dunno about the Type-C, https://wiki.pine64.org/index.php/ROCKPro64_Main_Page#Expansion_Ports is where you will probably have to research.
    Didn't really spend much time with mine as I say; I sort of lost interest at that time.
    I think all the RK3399 boards have 2x USB 3.0 and 2x USB 2.0, with PCIe 2.1 x4 on most as well.
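    If you want to confirm what a given board and card have actually negotiated on that PCIe port, lspci reports both the capability and the live link; a minimal sketch (run on the board itself, the device address in the output will vary):

    sudo lspci -vv | grep -E 'LnkCap|LnkSta'
    # LnkCap is what the port can do (e.g. Speed 5GT/s, Width x4);
    # LnkSta is what was actually negotiated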
  6. Like
    Stuart Naylor got a reaction from Jens Bauer in Software RAID testing Rockpi4 Marvell 4 port sata   
    I am not really a fan of the port multipliers; they are slightly Mickey Mouse and probably fan out to too many devices.
    But they are really cheap; I have still to find a glowing review though.
    If you are going JBOD or whatever form of linear aggregation, then it is fine with slower drives, as SATA I is 150MB/s; and maybe don't use the 5th port, as it doesn't compute (a linear mdadm sketch below).
    That could be a whopping 16 drives from a Marvell 9235 PCIe 3.0 4-port!
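    A linear (JBOD-style) concatenation is the simple case mdadm handles directly; a minimal sketch across four drives, with the device names being assumptions:

    # hypothetical: concatenate four drives into one linear md device
    sudo mdadm --create /dev/md0 --level=linear --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    # linear just appends capacity with no striping, so a port multiplier's shared bandwidth hurts less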

    The Marvell 9235 actually came with 4x SATA cables, but to keep things low profile I wanted to use the right-angle side of the on-board connectors.
    Strangely the ones supplied are the DVD type, so they point in on each other if you try; but strangely enough Kenable did come to the rescue with 4x black 6Gb/s, good quality and probably still the cheapest.

    I am really digging the RockPi4 because of its form factor, but also the implementation of USB-C PD, as sod those little wall chargers: I have a 60 watt 12V CCTV-type 5.5mm barrel supply and a USB-C adapter.
    There is an absolute plethora of 5.5mm barrel connectors and splitters; they are a much better power connector, and it also gives me a 12V rail.
    I have one of those 5-to-4-pin Molex daisy-chain SATA power cables you often see; the 4-pin Molex is snipped off, as it uses a 5.5mm barrel with terminals for the two 12V wires, and there is a tiny fixed-voltage 3.0 amp buck for the 5V.
    I really like that arrangement, as it takes no GPIO and power draw is also isolated from the board on separate rails.

    There are a lot of RK3399 alternatives that probably all benefit from that Google / Rockchip partnership, which as a package I have started to quite appreciate.
    I didn't at first: I had the Pine RockPro64 after a long trial of Pi alternatives, and as it was an early arrival I had a tantrum about custom images and kernels where nothing works, and got rid of it.
    Being English, I have been wishing for a long while that Raspberry would up the ante and set a new precedent for the application SoC, but with the arrival of the Pi4 I have given up on that ever happening.

    Because they are so cheap, a 3.5" 5-bay drive cage means I can fit 4x 3.5" drives, and 3.5" 1TB drives are also much easier to source second-hand at approx £10-15 + P&P.

     
  7. Like
    Stuart Naylor got a reaction from Jens Bauer in Software RAID testing Rockpi4 Marvell 4 port sata   
    Cheers Jens.
     
    The RK3399 is PCIe 2.1 x4; the Marvell 9235 is x2, so 2 lanes, and I cannot find an x4 card in M.2; I presume the silicon exists but it doesn't seem to be available.

    This was just suck-it-and-see, as I was expecting to max out the RK3399 & Marvell 9235, but it could probably cope with a faster SSD array.
    I am not going to use SSDs, but it gives me a datum on the bandwidth available, and if RAID, I would probably use RAID 10.
    But I guess it depends on what you are going to do, as many may just want a media store, and redundancy might not even be key.

    The RK3399 with straight Samba and, again, no optimisation is providing >100MB/s over ethernet; approx 110MB/s seems the normal steady max, which is probably near the best 1GbE will do.

    I haven't got a switch that can do LACP, but it's sort of pointless as generally there will be a 1GbE bottleneck somewhere; purely for curiosity, though, I was wondering about using the USB 3.0 ports for ethernet aggregation (sketched below).
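    Just to sketch what that aggregation could look like with a USB 3.0 gigabit adapter alongside the onboard NIC. The interface names here are assumptions, and round-robin bonding needs the far end to cooperate, so treat this as an experiment rather than a recipe:

    # hypothetical: bond the onboard NIC (eth0) with a USB adapter (eth1) in round-robin mode
    sudo ip link add bond0 type bond mode balance-rr
    sudo ip link set eth0 down && sudo ip link set eth0 master bond0
    sudo ip link set eth1 down && sudo ip link set eth1 master bond0
    sudo ip link set bond0 up
    sudo dhclient bond0    # or assign a static address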

    There are some tricks with CPU affinity (a sketch below), and I could try clocking the RK3399 nearer to 2.0GHz/1.6GHz big/little, again out of curiosity; as it is, it's more than enough for 1GbE.
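    The affinity trick mostly comes down to pinning the NIC interrupt onto one of the big A72 cores; a minimal sketch, where the IRQ number is an assumption you would look up on your own board:

    grep eth0 /proc/interrupts                      # find the NIC's IRQ number first
    echo 10 | sudo tee /proc/irq/27/smp_affinity    # hypothetical IRQ 27; mask 0x10 = CPU4, the first A72 on RK3399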

    ZFS seems to be getting quite popular, so I might also give that a go (a rough sketch below). Loving OpenMediaVault: such an excellent NAS, and really a full-blown micro server with the plugins it has.
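    For reference, the rough ZFS equivalent of the RAID 10 layout above is striped mirrors; a minimal sketch, assuming ZFS is installed and using the same four device names (the pool name is hypothetical):

    # hypothetical: two mirror vdevs striped together, roughly ZFS's answer to RAID 10
    sudo zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd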
  8. Like
    Stuart Naylor got a reaction from Jens Bauer in Software RAID testing Rockpi4 Marvell 4 port sata   
    Hi, I am just doing some tests on a RockPi4B 2GB with a Marvell 9235 SATA controller and 4x Integral P5 120GB SSDs.

    Purely out of interest, as I was expecting the card to bottleneck, but I generally thought push it to the max and see how things go.

    I am just running on the default Radxa Debian-stretch-4.4, using mdadm for a start.

    Looking for benchmark tips & tricks and what I should be outputting, so we can have a look for curiosity's sake.

    Currently syncing a RAID 10 (the create command would be along the lines of the sketch below).
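    The RAID 10 create command didn't make it into the post, so as a sketch, assuming the same four drives as the RAID 5/1/0 commands further down:

    # hypothetical: 4-disk RAID 10 matching the mdstat output below
    sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd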
     
    rock@rockpi4:~$ cat /proc/mdstat
    Personalities : [raid0] [raid10]
    md0 : active raid10 sdd[3] sdc[2] sdb[1] sda[0]
          234309632 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
          [===========>.........]  resync = 55.4% (129931520/234309632) finish=8.5min speed=202624K/sec
          bitmap: 2/2 pages [8KB], 65536KB chunk
    Jul 4 13:46:45 rockpi4 kernel: [ 75.172062] md0: detected capacity change from 479866126336 to 0
    Jul 4 13:46:45 rockpi4 kernel: [ 75.172628] md: md0 stopped.
    Jul 4 13:46:45 rockpi4 kernel: [ 75.173397] md: unbind<sda>
    Jul 4 13:46:45 rockpi4 kernel: [ 75.190852] md: export_rdev(sda)
    Jul 4 13:46:45 rockpi4 kernel: [ 75.191282] md: unbind<sdd>
    Jul 4 13:46:45 rockpi4 kernel: [ 75.206849] md: export_rdev(sdd)
    Jul 4 13:46:45 rockpi4 kernel: [ 75.207325] md: unbind<sdb>
    Jul 4 13:46:45 rockpi4 udisksd[565]: Unable to resolve /sys/devices/virtual/block/md0/md/dev-sdb/block symlink
    Jul 4 13:46:45 rockpi4 kernel: [ 75.239056] md: export_rdev(sdb)
    Jul 4 13:46:45 rockpi4 kernel: [ 75.239439] md: unbind<sdc>
    Jul 4 13:46:45 rockpi4 kernel: [ 75.254837] md: export_rdev(sdc)
    Jul 4 13:47:12 rockpi4 kernel: [ 102.258308] sdc: sdc1 sdc2
    Jul 4 13:47:12 rockpi4 kernel: [ 102.288150] sdc: sdc1 sdc2
    Jul 4 13:48:09 rockpi4 kernel: [ 159.300017] md: bind<sda>
    Jul 4 13:48:09 rockpi4 kernel: [ 159.308923] md: bind<sdb>
    Jul 4 13:48:09 rockpi4 kernel: [ 159.319055] md: bind<sdc>
    Jul 4 13:48:09 rockpi4 kernel: [ 159.320188] md: bind<sdd>
    Jul 4 13:48:09 rockpi4 kernel: [ 159.326830] md/raid0:md0: md_size is 937238528 sectors.
    Jul 4 13:48:09 rockpi4 kernel: [ 159.327314] md: RAID0 configuration for md0 - 1 zone
    Jul 4 13:48:09 rockpi4 kernel: [ 159.327759] md: zone0=[sda/sdb/sdc/sdd]
    Jul 4 13:48:09 rockpi4 kernel: [ 159.328165] zone-offset= 0KB, device-offset= 0KB, size= 468619264KB
    Jul 4 13:48:09 rockpi4 kernel: [ 159.328937] sdc: sdc1 sdc2
    Jul 4 13:48:09 rockpi4 kernel: [ 159.329369]
    Jul 4 13:48:09 rockpi4 kernel: [ 159.330145] md0: detected capacity change from 0 to 479866126336
    Jul 4 13:48:09 rockpi4 udisksd[565]: Error creating watch for file /sys/devices/virtual/block/md0/md/sync_action: No such file or directory (g-file-error-quark, 4)
    Jul 4 13:48:09 rockpi4 udisksd[565]: Error creating watch for file /sys/devices/virtual/block/md0/md/degraded: No such file or directory (g-file-error-quark, 4)
    Jul 4 13:49:40 rockpi4 kernel: [ 250.355809] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
    Jul 4 13:55:31 rockpi4 kernel: [ 601.335494] panel disable
    Jul 4 14:02:26 rockpi4 anacron[1047]: Anacron 2.3 started on 2019-07-04
    Jul 4 14:02:26 rockpi4 anacron[1047]: Normal exit (0 jobs run)
    Jul 4 14:02:59 rockpi4 kernel: [ 1049.309314] md0: detected capacity change from 479866126336 to 0
    Jul 4 14:02:59 rockpi4 kernel: [ 1049.309886] md: md0 stopped.
    Jul 4 14:02:59 rockpi4 kernel: [ 1049.310176] md: unbind<sdd>
    Jul 4 14:02:59 rockpi4 kernel: [ 1049.327147] md: export_rdev(sdd)
    Jul 4 14:02:59 rockpi4 kernel: [ 1049.327821] md: unbind<sdc>
    Jul 4 14:02:59 rockpi4 kernel: [ 1049.350959] md: export_rdev(sdc)
    Jul 4 14:02:59 rockpi4 kernel: [ 1049.351512] md: unbind<sdb>
    Jul 4 14:02:59 rockpi4 udisksd[565]: Unable to resolve /sys/devices/virtual/block/md0/md/dev-sdb/block symlink
    Jul 4 14:02:59 rockpi4 kernel: [ 1049.366971] md: export_rdev(sdb)
    Jul 4 14:02:59 rockpi4 kernel: [ 1049.367513] md: unbind<sda>
    Jul 4 14:02:59 rockpi4 kernel: [ 1049.383124] md: export_rdev(sda)
    Jul 4 14:03:21 rockpi4 kernel: [ 1071.066678] sdc: sdc1 sdc2
    Jul 4 14:03:21 rockpi4 kernel: [ 1071.092394] sdc: sdc1 sdc2
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.551804] md: bind<sda>
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.552267] sdc: sdc1 sdc2
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.552547] md: bind<sdb>
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.553780] md: bind<sdc>
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.554266] md: bind<sdd>
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.570556] md: raid10 personality registered for level 10
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.573138] md/raid10:md0: not clean -- starting background reconstruction
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.573765] md/raid10:md0: active with 4 out of 4 devices
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.575635] created bitmap (2 pages) for device md0
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.578102] md0: bitmap initialized from disk: read 1 pages, set 3576 of 3576 bits
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.581797] md0: detected capacity change from 0 to 239933063168
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.583297] md: md0 switched to read-write mode.
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.588652] md: resync of RAID array md0
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.589019] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.589541] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
    Jul 4 14:05:23 rockpi4 kernel: [ 1193.590381] md: using 128k window, over a total of 234309632k.
    Jul 4 14:25:02 rockpi4 kernel: [ 2372.292473] md: md0: resync done.
    Jul 4 14:25:02 rockpi4 kernel: [ 2372.452970] RAID10 conf printout:
    Jul 4 14:25:02 rockpi4 kernel: [ 2372.452989] --- wd:4 rd:4
    Jul 4 14:25:02 rockpi4 kernel: [ 2372.452998] disk 0, wo:0, o:1, dev:sda
    Jul 4 14:25:02 rockpi4 kernel: [ 2372.453005] disk 1, wo:0, o:1, dev:sdb
    Jul 4 14:25:02 rockpi4 kernel: [ 2372.453012] disk 2, wo:0, o:1, dev:sdc
    Jul 4 14:25:02 rockpi4 kernel: [ 2372.453019] disk 3, wo:0, o:1, dev:sdd
    Jul 4 14:30:45 rockpi4 kernel: [ 2715.470782] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4    20355    28533    43513    44568     22556     28612
      102400      16    60835    71891   111520   107540     66074     71640
      102400     512   149988   129385   253123   263113    211684    131649
      102400    1024   161360   164943   274007   275765    253893    165764
      102400   16384   181646   182851   338294   347395    342601    176768
    RAID5
     
    rock@rockpi4:~$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    mdadm: layout defaults to left-symmetric
    mdadm: layout defaults to left-symmetric
    mdadm: chunk size defaults to 512K
    mdadm: /dev/sdc appears to be part of a raid array:
           level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
    mdadm: partition table exists on /dev/sdc but will be lost or
           meaningless after creating array
    mdadm: size set to 117154816K
    mdadm: automatically enabling write-intent bitmap on large array
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    rock@rockpi4:~$ cat /proc/mdstat
    Personalities : [raid10] [raid6] [raid5] [raid4]
    md0 : active raid5 sdd[4] sdc[2] sdb[1] sda[0]
          351464448 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
          [>....................]  recovery =  1.6% (1898560/117154816) finish=19.2min speed=99924K/sec
          bitmap: 0/1 pages [0KB], 65536KB chunk
    Jul  4 14:49:52 rockpi4 kernel: [  491.913061] md: bind<sda>
    Jul  4 14:49:52 rockpi4 kernel: [  491.913784] md: bind<sdb>
    Jul  4 14:49:52 rockpi4 kernel: [  491.914381] md: bind<sdc>
    Jul  4 14:49:52 rockpi4 kernel: [  491.914971] md: bind<sdd>
    Jul  4 14:49:52 rockpi4 kernel: [  491.920396]  sdc: sdc1 sdc2
    Jul  4 14:49:52 rockpi4 kernel: [  491.929530] async_tx: api initialized (async)
    Jul  4 14:49:52 rockpi4 kernel: [  491.952339] md: raid6 personality registered for level 6
    Jul  4 14:49:52 rockpi4 kernel: [  491.952833] md: raid5 personality registered for level 5
    Jul  4 14:49:52 rockpi4 kernel: [  491.953316] md: raid4 personality registered for level 4
    Jul  4 14:49:52 rockpi4 kernel: [  491.959926] md/raid:md0: device sdc operational as raid disk 2
    Jul  4 14:49:52 rockpi4 kernel: [  491.960484] md/raid:md0: device sdb operational as raid disk 1
    Jul  4 14:49:52 rockpi4 kernel: [  491.961025] md/raid:md0: device sda operational as raid disk 0
    Jul  4 14:49:52 rockpi4 kernel: [  491.962943] md/raid:md0: allocated 4384kB
    Jul  4 14:49:52 rockpi4 kernel: [  491.964488] md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
    Jul  4 14:49:52 rockpi4 kernel: [  491.965161] RAID conf printout:
    Jul  4 14:49:52 rockpi4 kernel: [  491.965169]  --- level:5 rd:4 wd:3
    Jul  4 14:49:52 rockpi4 kernel: [  491.965177]  disk 0, o:1, dev:sda
    Jul  4 14:49:52 rockpi4 kernel: [  491.965183]  disk 1, o:1, dev:sdb
    Jul  4 14:49:52 rockpi4 kernel: [  491.965188]  disk 2, o:1, dev:sdc
    Jul  4 14:49:52 rockpi4 kernel: [  491.965603] created bitmap (1 pages) for device md0
    Jul  4 14:49:52 rockpi4 kernel: [  491.966746] md0: bitmap initialized from disk: read 1 pages, set 1788 of 1788 bits
    Jul  4 14:49:52 rockpi4 kernel: [  491.968765] md0: detected capacity change from 0 to 359899594752
    Jul  4 14:49:52 rockpi4 kernel: [  491.969465] md: md0 switched to read-write mode.
    Jul  4 14:49:52 rockpi4 kernel: [  491.969930] RAID conf printout:
    Jul  4 14:49:52 rockpi4 kernel: [  491.969951]  --- level:5 rd:4 wd:3
    Jul  4 14:49:52 rockpi4 kernel: [  491.969968]  disk 0, o:1, dev:sda
    Jul  4 14:49:52 rockpi4 kernel: [  491.969984]  disk 1, o:1, dev:sdb
    Jul  4 14:49:52 rockpi4 kernel: [  491.969997]  disk 2, o:1, dev:sdc
    Jul  4 14:49:52 rockpi4 kernel: [  491.970009]  disk 3, o:1, dev:sdd
    Jul  4 14:49:52 rockpi4 kernel: [  491.980149] md: recovery of RAID array md0
    Jul  4 14:49:52 rockpi4 kernel: [  491.980523] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
    Jul  4 14:49:52 rockpi4 kernel: [  491.981044] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
    Jul  4 14:49:52 rockpi4 kernel: [  491.981894] md: using 128k window, over a total of 117154816k.
    Jul  4 14:51:41 rockpi4 kernel: [  601.050246] panel disable
    Jul  4 15:00:30 rockpi4 anacron[1052]: Anacron 2.3 started on 2019-07-04
    Jul  4 15:00:30 rockpi4 anacron[1052]: Normal exit (0 jobs run)
    Jul  4 15:05:53 rockpi4 kernel: [ 1453.287257] md: md0: recovery done.
    Jul  4 15:05:53 rockpi4 kernel: [ 1453.567652] RAID conf printout:
    Jul  4 15:05:53 rockpi4 kernel: [ 1453.567661]  --- level:5 rd:4 wd:4
    Jul  4 15:05:53 rockpi4 kernel: [ 1453.567666]  disk 0, o:1, dev:sda
    Jul  4 15:05:53 rockpi4 kernel: [ 1453.567670]  disk 1, o:1, dev:sdb
    Jul  4 15:05:53 rockpi4 kernel: [ 1453.567674]  disk 2, o:1, dev:sdc
    Jul  4 15:05:53 rockpi4 kernel: [ 1453.567677]  disk 3, o:1, dev:sdd
    Jul  4 15:07:07 rockpi4 kernel: [ 1527.108599] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4     8159     8947    43789    42643     24543     10212
      102400      16    33078    40985    98244    98407     70763     41851
      102400     512    52870    53418   212184   202157    203772     50657
      102400    1024    66426    69555   250660   250200    249607     69539
      102400   16384   108537   112300   326090   324173    320777    106363
    RAID1
     
    rock@rockpi4:~$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    mdadm: size set to 117155264K
    mdadm: automatically enabling write-intent bitmap on large array
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    rock@rockpi4:~$ cat /proc/mdstat
    Personalities : [raid10] [raid6] [raid5] [raid4] [raid1]
    md0 : active raid1 sdb[1] sda[0]
          117155264 blocks super 1.2 [2/2] [UU]
          [>....................]  resync =  2.3% (2801408/117155264) finish=8.8min speed=215492K/sec
          bitmap: 1/1 pages [4KB], 65536KB chunk
    unused devices: <none>

    Jul  4 15:20:25 rockpi4 kernel: [ 2324.757953] md: bind<sda>
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.759742] md: bind<sdb>
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.772561] md: raid1 personality registered for level 1
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.783910] md/raid1:md0: not clean -- starting background reconstruction
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.784534] md/raid1:md0: active with 2 out of 2 mirrors
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.785261] created bitmap (1 pages) for device md0
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.787956] md0: bitmap initialized from disk: read 1 pages, set 1788 of 1788 bits
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.790798] md0: detected capacity change from 0 to 119966990336
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.791556] md: md0 switched to read-write mode.
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.794162] md: resync of RAID array md0
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.794546] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.795124] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
    Jul  4 15:20:25 rockpi4 kernel: [ 2324.795964] md: using 128k window, over a total of 117155264k.
    Jul  4 15:30:14 rockpi4 kernel: [ 2913.737079] md: md0: resync done.
    Jul  4 15:30:14 rockpi4 kernel: [ 2913.745998] RAID1 conf printout:
    Jul  4 15:30:14 rockpi4 kernel: [ 2913.746016]  --- wd:2 rd:2
    Jul  4 15:30:14 rockpi4 kernel: [ 2913.746027]  disk 0, wo:0, o:1, dev:sda
    Jul  4 15:30:14 rockpi4 kernel: [ 2913.746035]  disk 1, wo:0, o:1, dev:sdb
    Jul  4 15:31:19 rockpi4 kernel: [ 2978.675630] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)

    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4    24759    31559    39765    41196     25476     30710
      102400      16    62662    73245   124756   125744     62209     72778
      102400     512   139397   160038   260433   261606    218154    147652
      102400    1024   165815   155189   258119   261744    232643    164702
      102400   16384   172905   186702   318211   322998    321997    170680
    RAID0

     
    rock@rockpi4:~$ sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    mdadm: chunk size defaults to 512K
    mdadm: /dev/sdc appears to be part of a raid array:
           level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
    mdadm: partition table exists on /dev/sdc but will be lost or
           meaningless after creating array
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    rock@rockpi4:~$ cat /proc/mdstat
    Personalities : [raid10] [raid6] [raid5] [raid4] [raid1] [raid0]
    md0 : active raid0 sdd[3] sdc[2] sdb[1] sda[0]
          468619264 blocks super 1.2 512k chunks
    unused devices: <none>

    Jul  4 15:38:35 rockpi4 kernel: [ 3415.084442] md: bind<sda>
    Jul  4 15:38:35 rockpi4 kernel: [ 3415.085523] md: bind<sdb>
    Jul  4 15:38:35 rockpi4 kernel: [ 3415.086511] md: bind<sdc>
    Jul  4 15:38:35 rockpi4 kernel: [ 3415.087930] md: bind<sdd>
    Jul  4 15:38:35 rockpi4 kernel: [ 3415.101830] md: raid0 personality registered for level 0
    Jul  4 15:38:35 rockpi4 kernel: [ 3415.101836]  sdc: sdc1 sdc2
    Jul  4 15:38:35 rockpi4 kernel: [ 3415.107953] md/raid0:md0: md_size is 937238528 sectors.
    Jul  4 15:38:35 rockpi4 kernel: [ 3415.108427] md: RAID0 configuration for md0 - 1 zone
    Jul  4 15:38:35 rockpi4 kernel: [ 3415.108866] md: zone0=[sda/sdb/sdc/sdd]
    Jul  4 15:38:35 rockpi4 kernel: [ 3415.109261]       zone-offset=         0KB, device-offset=         0KB, size= 468619264KB
    Jul  4 15:38:35 rockpi4 kernel: [ 3415.109973]
    Jul  4 15:38:35 rockpi4 kernel: [ 3415.110235] md0: detected capacity change from 0 to 479866126336
    Jul  4 15:38:35 rockpi4 udisksd[572]: Error creating watch for file /sys/devices/virtual/block/md0/md/sync_action: No such file or directory (g-file-error-quark, 4)
    Jul  4 15:38:35 rockpi4 udisksd[572]: Error creating watch for file /sys/devices/virtual/block/md0/md/degraded: No such file or directory (g-file-error-quark, 4)
    Jul  4 15:41:08 rockpi4 kernel: [ 3568.278677] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
     
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4    31874    42784    44859    48796     26191     42465
      102400      16    89104   112188   110570   114486     77652    111816
      102400     512   248787   259180   258800   270097    227197    229707
      102400    1024   309271   324243   293455   293122    268819    286143
      102400   16384   373574   382208   324869   326204    326070    380622

    Concurrent single disks
     
    Command line used: iozone -l 4 -u 4 -r 16k -s 512M -F /home/rock/sda/tmp1 /home/rock/sdb/tmp2 /home/rock/sdc/tmp3 /home/rock/sdd/tmp4
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
    Min process = 4, Max process = 4
    Throughput test with 4 processes; each process writes a 524288 kByte file in 16 kByte records.

    Test (all kB/sec)      Children see   Parent sees   Min/process   Max/process   Avg/process   Min xfer (kB)
    4 initial writers         468982.85     391562.16     115979.48     118095.79     117245.71       513488.00
    4 rewriters               448753.70     378103.46     108174.91     119841.15     112188.42       472992.00
    4 readers                 319857.60     319587.93      78386.40      81170.33      79964.40       506336.00
    4 re-readers              331737.53     331539.26      74617.11      90278.13      82934.38       433360.00
    4 reverse readers         769042.86     768023.53      43320.77     262961.66     192260.72        86384.00
    4 stride readers         1795856.09    1781767.61      65569.88     920383.50     448964.02        37360.00
    4 random readers         1971409.70    1958188.18      69869.92     861175.75     492852.43        41904.00
    4 mixed workload         1176863.17     275991.88      98414.23     606498.81     294215.79        84304.00
    4 random writers          428459.84     318774.34      96696.56     118440.29     107114.96       428352.00
    4 pwrite writers          467800.79     381736.33     111798.68     120814.23     116950.20       485168.00
    4 pread readers           309714.87     309501.91      76447.56      79120.13      77428.72       506592.00
    4 fwriters                442763.85     373418.60     107828.45     114495.70     110690.96       524288.00
    4 freaders                331765.48     325459.39      81387.83      86099.32      82941.37       524288.00

    Single disk sda
     
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4    36038    45031    52457    52672     27342     44553
      102400      16    93224   115531   124822   114115     79868    115219
      102400     512   249415   223799   267595   273488     227651   258480
      102400    1024   259449   236700   268852   273148     242803   266988
      102400   16384   313281   317096   324922   325600     319687   267843

    Single disk sdb
     
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4    33918    45021    52628    52655     27404     44621
      102400      16   100152   106531   127148   115452     76579    113503
      102400     512   251035   259812   272338   273634    227332    225607
      102400    1024   260791   268019   273578   276074    241042    268323
      102400   16384   267448   316877   323467   324679    319983    316710

    Single disk sdc
     
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4    36074    44819    52358    52592     23334     44073
      102400      16    92510   114568   127346   126830     72293    112819
      102400     512   220032   260191   271136   274745    225818    258574
      102400    1024   258895   228236   270047   271946    239184    267370
      102400   16384   312151   316425   318919   323689    317570    268308

    Single disk sdd
     
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4    36100    44939    52756    52768     27569     42697
      102400      16   100207   111073   127120   118992     76555    105342
      102400     512   248869   259052   271718   272745    227450    258252
      102400    1024   226653   266979   262772   265104    236617    266018
      102400   16384   314211   269062   322937   325634    320150    315470
     

     
  9. Like
    Stuart Naylor got a reaction from Jens Bauer and lanefu in Software RAID testing Rockpi4 Marvell 4 port sata   
    Forgot to set the RockPi4 to PCIe gen2, doh!

     
    Also, if you are a plonker and forget to edit `/boot/hw_intfc.conf` from `#intfc:dtoverlay=pcie-gen2` to `intfc:dtoverlay=pcie-gen2`, you will be running at PCIe gen1.
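    If you would rather not hand-edit it, a one-liner can un-comment the overlay; a sketch, assuming the stock Radxa image layout:

    # hypothetical: enable the pcie-gen2 overlay, then reboot for it to take effect
    sudo sed -i 's/^#intfc:dtoverlay=pcie-gen2/intfc:dtoverlay=pcie-gen2/' /boot/hw_intfc.conf
    sudo reboot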
    RAID 10
     
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4    11719    15447    55220    53720     25421     12773
      102400      16    39410    54840   139482   145128     81258     43792
      102400     512   228002   220126   334104   339660    265930    225507
      102400    1024   244376   243730   451377   462467    397566    258481
      102400   16384   270088   304411   597462   610057    615669    297855

    RAID 5
     
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4     6133     6251    47505    46013     25046      8190
      102400      16    17103    17134   113272   133606     79753     20420
      102400     512    61418    50852   241860   246467    244030     58031
      102400    1024    79325    73325   363343   359830    361882     83655
      102400   16384   127548   124702   625256   642094    650407    136680

    RAID 1
     
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4    23713    29698    45608    45983     23657     30381
      102400      16    79205    82546   138060   144557     82126     93921
      102400     512   212859   221943   307613   304036    259783    179355
      102400    1024   235985   243783   366101   369935    317354    198861
      102400   16384   289036   290279   410520   398875    399868    295329

    RAID 0
     
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4    33519    47927    52701    51023     26700     46382
      102400      16   105763   132604   138080   155514     87026    135111
      102400     512   276220   320320   311343   294629    267624    335363
      102400    1024   493565   522038   463105   470833    398584    522560
      102400   16384   687516   701200   625733   623531    555318    681535

    4 individual disks concurrent

     
    Command line used: iozone -l 4 -u 4 -r 16k -s 512M -F /srv/dev-disk-by-label-sda/tmp1 /srv/dev-disk-by-label-sdb/tmp2 /srv/dev-disk-by-label-sdc/tmp3 /srv/dev-disk-by-label-sdd/tmp4
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
    Min process = 4, Max process = 4
    Throughput test with 4 processes; each process writes a 524288 kByte file in 16 kByte records.

    Test (all kB/sec)      Children see   Parent sees   Min/process   Max/process   Avg/process   Min xfer (kB)
    4 initial writers         884590.91     701620.17     195561.27     234457.59     221147.73       437344.00
    4 rewriters               822771.77     701488.29     180381.25     232223.50     205692.94       408720.00
    4 readers                 755252.30     753357.02     169105.11     198976.81     188813.07       445664.00
    4 re-readers              753492.39     750353.64     160626.64     201223.11     188373.10       418528.00
    4 reverse readers         780261.86     778761.55      58371.02     254657.08     195065.47       120192.00
    4 stride readers          317923.62     316905.36      63171.63      98114.27      79480.91       337600.00
    4 random readers          798898.78     794905.95      57059.89     391248.59     199724.70        76480.00
    4 mixed workload          647158.06     491223.65      28319.04     305288.75     161789.51        48720.00
    4 random writers          734947.98     544531.66     167241.00     207134.38     183737.00       424704.00
    4 pwrite writers          879712.72     686621.58     186624.69     236047.30     219928.18       415856.00
    4 pread readers           777243.34     773302.81     184983.08     203392.77     194310.84       476896.00
    4 fwriters                820877.50     693823.17     194228.28     217311.28     205219.38       524288.00
    4 freaders               1924029.62    1071393.99     268087.50     970331.94     481007.41       524288.00

    Single disk sda reference
     
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                      random    random
          kB  reclen    write  rewrite     read   reread      read     write
      102400       4    35191    45728    56689    53307     27889     48508
      102400      16   104379   122405   154385   157484     88670    113964
      102400     512   315788   347042   351932   348604    271399    288430
      102400    1024   358399   366194   388893   379453    338470    369888
      102400   16384   353154   443256   425396   422384    410580    444530
     
  12. Like
    Stuart Naylor got a reaction from lanefu in Software RAID testing Rockpi4 Marvell 4 port sata   
    Forgot to set the RockPi4 to pcie2 doh!

     
    Also if you are a plonker and forget to edit  `/boot/hw_intfc.conf` from `#intfc:dtoverlay=pcie-gen2` to `intfc:dtoverlay=pcie-gen2` you will be running on pcie-gen1
    RAID 10
     
            Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2         Output is in kBytes/sec         Time Resolution = 0.000001 seconds.         Processor cache size set to 1024 kBytes.         Processor cache line size set to 32 bytes.         File stride size set to 17 * record size.                                                               random    random     bkwd    record    stride               kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread           102400       4    11719    15447    55220    53720    25421    12773           102400      16    39410    54840   139482   145128    81258    43792           102400     512   228002   220126   334104   339660   265930   225507           102400    1024   244376   243730   451377   462467   397566   258481           102400   16384   270088   304411   597462   610057   615669   297855

    RAID 5
     
            Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2         Output is in kBytes/sec         Time Resolution = 0.000001 seconds.         Processor cache size set to 1024 kBytes.         Processor cache line size set to 32 bytes.         File stride size set to 17 * record size.                                                               random    random     bkwd    record    stride               kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread           102400       4     6133     6251    47505    46013    25046     8190           102400      16    17103    17134   113272   133606    79753    20420           102400     512    61418    50852   241860   246467   244030    58031           102400    1024    79325    73325   363343   359830   361882    83655           102400   16384   127548   124702   625256   642094   650407   136680

    RAID 1
     
            Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2         Output is in kBytes/sec         Time Resolution = 0.000001 seconds.         Processor cache size set to 1024 kBytes.         Processor cache line size set to 32 bytes.         File stride size set to 17 * record size.                                                               random    random     bkwd    record    stride               kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread           102400       4    23713    29698    45608    45983    23657    30381           102400      16    79205    82546   138060   144557    82126    93921           102400     512   212859   221943   307613   304036   259783   179355           102400    1024   235985   243783   366101   369935   317354   198861           102400   16384   289036   290279   410520   398875   399868   295329

    RAID 0
     
            Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2         Output is in kBytes/sec         Time Resolution = 0.000001 seconds.         Processor cache size set to 1024 kBytes.         Processor cache line size set to 32 bytes.         File stride size set to 17 * record size.                                                               random    random     bkwd    record    stride               kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread           102400       4    33519    47927    52701    51023    26700    46382           102400      16   105763   132604   138080   155514    87026   135111           102400     512   276220   320320   311343   294629   267624   335363           102400    1024   493565   522038   463105   470833   398584   522560           102400   16384   687516   701200   625733   623531   555318   681535 4 individual disks conurrent

     
        Command line used: iozone -l 4 -u 4 -r 16k -s 512M -F /srv/dev-disk-by-label-sda/tmp1 /srv/dev-disk-by-label-sdb/tmp2 /srv/dev-disk-by-label-sdc/tmp3 /srv/dev-disk-by-label-sdd/tmp4
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 4
        Max process = 4
        Throughput test with 4 processes
        Each process writes a 524288 kByte file in 16 kByte records

        Children see throughput for  4 initial writers  =  884590.91 kB/sec
        Parent sees throughput for  4 initial writers   =  701620.17 kB/sec
        Min throughput per process                      =  195561.27 kB/sec
        Max throughput per process                      =  234457.59 kB/sec
        Avg throughput per process                      =  221147.73 kB/sec
        Min xfer                                        =  437344.00 kB

        Children see throughput for  4 rewriters        =  822771.77 kB/sec
        Parent sees throughput for  4 rewriters         =  701488.29 kB/sec
        Min throughput per process                      =  180381.25 kB/sec
        Max throughput per process                      =  232223.50 kB/sec
        Avg throughput per process                      =  205692.94 kB/sec
        Min xfer                                        =  408720.00 kB

        Children see throughput for  4 readers          =  755252.30 kB/sec
        Parent sees throughput for  4 readers           =  753357.02 kB/sec
        Min throughput per process                      =  169105.11 kB/sec
        Max throughput per process                      =  198976.81 kB/sec
        Avg throughput per process                      =  188813.07 kB/sec
        Min xfer                                        =  445664.00 kB

        Children see throughput for 4 re-readers        =  753492.39 kB/sec
        Parent sees throughput for 4 re-readers         =  750353.64 kB/sec
        Min throughput per process                      =  160626.64 kB/sec
        Max throughput per process                      =  201223.11 kB/sec
        Avg throughput per process                      =  188373.10 kB/sec
        Min xfer                                        =  418528.00 kB

        Children see throughput for 4 reverse readers   =  780261.86 kB/sec
        Parent sees throughput for 4 reverse readers    =  778761.55 kB/sec
        Min throughput per process                      =   58371.02 kB/sec
        Max throughput per process                      =  254657.08 kB/sec
        Avg throughput per process                      =  195065.47 kB/sec
        Min xfer                                        =  120192.00 kB

        Children see throughput for 4 stride readers    =  317923.62 kB/sec
        Parent sees throughput for 4 stride readers     =  316905.36 kB/sec
        Min throughput per process                      =   63171.63 kB/sec
        Max throughput per process                      =   98114.27 kB/sec
        Avg throughput per process                      =   79480.91 kB/sec
        Min xfer                                        =  337600.00 kB

        Children see throughput for 4 random readers    =  798898.78 kB/sec
        Parent sees throughput for 4 random readers     =  794905.95 kB/sec
        Min throughput per process                      =   57059.89 kB/sec
        Max throughput per process                      =  391248.59 kB/sec
        Avg throughput per process                      =  199724.70 kB/sec
        Min xfer                                        =   76480.00 kB

        Children see throughput for 4 mixed workload    =  647158.06 kB/sec
        Parent sees throughput for 4 mixed workload     =  491223.65 kB/sec
        Min throughput per process                      =   28319.04 kB/sec
        Max throughput per process                      =  305288.75 kB/sec
        Avg throughput per process                      =  161789.51 kB/sec
        Min xfer                                        =   48720.00 kB

        Children see throughput for 4 random writers    =  734947.98 kB/sec
        Parent sees throughput for 4 random writers     =  544531.66 kB/sec
        Min throughput per process                      =  167241.00 kB/sec
        Max throughput per process                      =  207134.38 kB/sec
        Avg throughput per process                      =  183737.00 kB/sec
        Min xfer                                        =  424704.00 kB

        Children see throughput for 4 pwrite writers    =  879712.72 kB/sec
        Parent sees throughput for 4 pwrite writers     =  686621.58 kB/sec
        Min throughput per process                      =  186624.69 kB/sec
        Max throughput per process                      =  236047.30 kB/sec
        Avg throughput per process                      =  219928.18 kB/sec
        Min xfer                                        =  415856.00 kB

        Children see throughput for 4 pread readers     =  777243.34 kB/sec
        Parent sees throughput for 4 pread readers      =  773302.81 kB/sec
        Min throughput per process                      =  184983.08 kB/sec
        Max throughput per process                      =  203392.77 kB/sec
        Avg throughput per process                      =  194310.84 kB/sec
        Min xfer                                        =  476896.00 kB

        Children see throughput for  4 fwriters         =  820877.50 kB/sec
        Parent sees throughput for  4 fwriters          =  693823.17 kB/sec
        Min throughput per process                      =  194228.28 kB/sec
        Max throughput per process                      =  217311.28 kB/sec
        Avg throughput per process                      =  205219.38 kB/sec
        Min xfer                                        =  524288.00 kB

        Children see throughput for  4 freaders         = 1924029.62 kB/sec
        Parent sees throughput for  4 freaders          = 1071393.99 kB/sec
        Min throughput per process                      =  268087.50 kB/sec
        Max throughput per process                      =  970331.94 kB/sec
        Avg throughput per process                      =  481007.41 kB/sec
        Min xfer                                        =  524288.00 kB

    Single disk sda reference
     
        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.

                                                            random    random     bkwd    record    stride
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    35191    45728    56689    53307    27889    48508
          102400      16   104379   122405   154385   157484    88670   113964
          102400     512   315788   347042   351932   348604   271399   288430
          102400    1024   358399   366194   388893   379453   338470   369888
          102400   16384   353154   443256   425396   422384   410580   444530
     
  13. Like
    Stuart Naylor got a reaction from lanefu in RAM Disk with Zram   
    Tido is right: it's an example of zram creating a RAM drive, in this case one that holds log files.
    It would take minimal change for it to hold database storage instead, and you could use the same mechanisms on start and stop to transfer to media and make it persistent.

    With lz4 you get minimal overhead and near-RAM speed along with large compression gains, or you can use pure tmpfs RAM.
    I don't like how log2ram writes out full files every hour, so I did my own version https://github.com/StuartIanNaylor/log2zram, but it's all based on azlux's work, which in turn is based on https://debian-administration.org/article/661/A_transient_/var/log

    With log2ram or log2zram you could just change the bind directory, which basically moves the directory somewhere else so you can mount a RAM drive in its place.
    That is: create a bind mount to move your directory aside.
    Mount your RAM drive in the location that was moved.
    Copy the data to RAM and go.
    On stop you just do the reverse and copy the data back to the moved directory.
    Then unmount the RAM drive and the bind mount, and the data is persistent (see the sketch after these steps).
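    A minimal sketch of that start/stop cycle, assuming a hypothetical /var/log target, a /var/hdd.log bind directory and a 64M lz4 zram device (all names and sizes here are illustrative, not lifted from log2zram itself):

        #!/bin/sh
        # Illustrative only: a zram-backed /var/log with an on-disk copy for persistence
        LOG_DIR=/var/log
        BIND_DIR=/var/hdd.log

        start() {
            mkdir -p "$BIND_DIR"
            mount --bind "$LOG_DIR" "$BIND_DIR"        # the on-disk directory, moved aside
            modprobe zram
            echo lz4 > /sys/block/zram0/comp_algorithm # compressor must be set before disksize
            echo 64M > /sys/block/zram0/disksize
            mkfs.ext4 -q /dev/zram0
            mount /dev/zram0 "$LOG_DIR"                # RAM drive now sits where the dir was
            cp -a "$BIND_DIR/." "$LOG_DIR/"            # copy the data to RAM and go
        }

        stop() {
            cp -a "$LOG_DIR/." "$BIND_DIR/"            # copy back to the moved directory
            umount "$LOG_DIR"                          # drop the RAM drive...
            umount "$BIND_DIR"                         # ...and the bind; data persists on disk
        }

        case "$1" in start) start ;; stop) stop ;; esac

    Pure tmpfs works the same way, just with a mount -t tmpfs in place of the zram device setup.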

    With azlux's log2ram it copies out to the moved directory each hour, but that can still leave a gap of up to an hour, and IMHO that is probably pointless.
    If you have lost the last hour by using volatile memory storage, I am not really sure of the benefit of the hours preceding it.
    That is only if you get a complete crash with no orderly shutdown, which you can probably catch with a watchdog, but whatever landed in the last hour is lost.
    It is probably worse with logs, as the info about the crash itself is very likely to be lost and what you do have is of little interest.
    For most usage the complete crash is the exception and not likely to occur, apart from pulling the plug.

    zram is a hotplug system and a simple check is...
     
    if [ ! -d "/sys/class/zram-control" ]; then
        modprobe --verbose zram
        RAM_DEV='0'
    else
        RAM_DEV=$(cat /sys/class/zram-control/hot_add)
    fi

    I have the same routine in log2zram and zram-swap-config, so whichever runs first does the modprobe of zram0 and the other just hot_adds another device.
    https://github.com/StuartIanNaylor/zram-swap-config does the same as https://github.com/StuartIanNaylor/log2zram; you just have to be aware of other zram services that don't check for previously existing devices, or be sure you run after them.
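    Whichever branch ran, $RAM_DEV holds the device number to use, so services that follow this pattern can coexist. A sketch of what a zram-swap-config-style setup does next (the compressor, size and priority values here are examples only, not its actual defaults):

        # Works whether zram was just modprobed (device 0) or hot_add returned a new number
        echo lz4 > "/sys/block/zram${RAM_DEV}/comp_algorithm"
        echo 256M > "/sys/block/zram${RAM_DEV}/disksize"
        mkswap "/dev/zram${RAM_DEV}"
        swapon -p 75 "/dev/zram${RAM_DEV}"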
     
  14. Like
    Stuart Naylor reacted to JMCC in Rock64 Rockchip_dri driver missing   
    If I'm not wrong, that error is due to Mali not supporting OpenGL, but only OpenGL ES. In other words: there is no rockchip_dri.so, it just doesn't exist anywhere, and I don't think it will exist.
     
    Rockchip provides a tweaked X server that gives you glamor acceleration through OpenGL ES, but it only works for Mali Midgard (RK3288 and RK3399). However, you can get 3D acceleration for your RK3328 through the armsoc driver. If you are interested, I can give you more info on it.
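    For illustration, selecting the armsoc DDX is normally just a device section in the X config; this is a sketch only, since the exact file location and any extra options (such as a DRI device path) depend on the particular armsoc build:

        # /etc/X11/xorg.conf.d/20-armsoc.conf -- hypothetical path and file name
        Section "Device"
                Identifier "Mali"
                Driver     "armsoc"
        EndSection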
  15. Like
    Stuart Naylor reacted to Rosimildo in Armbian for tv box Z28   
    I have got a Z28 board (8188 wifi), and I have been able to boot Linux from an SD card.
    I used the image from the beginning of this thread.
     
    I'd like to help test the VPU driver. Is there any pre-built image with VPU enabled, and possibly with GStreamer/FFmpeg using the VPU capabilities?
    No need for Mali stuff for now.
     
    Any link with pointers would be nice.
     
    Thanks.
  16. Like
    Stuart Naylor reacted to Rosimildo in Armbian for tv box Z28   
    Thanks for all the links. I will browse them.
     
    rock64@rock64:~$ uname -a
    Linux rock64 4.4.77-rockchip-ayufan-118 #1 SMP Thu Sep 14 21:59:24 UTC 2017 aarch64 aarch64 aarch64 GNU/Linux
     
    I forgot to mention what works and what doesn't on the image I used, even after I upgraded using the "apt-get" commands.

    1. "reboot" does not work. It hangs, and a power off is required to restart.
    2. Ethernet does not work, and the MAC address changes on each reboot (see the note after this list).
    3. Wi-Fi is really bad; I need to place the box close to the router (AP), no more than about 5 m (the chip is the 8188).
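    On point 2, a common stopgap once the driver itself is working (a sketch; eth0 and the address are placeholders, and a locally administered address is used so it cannot clash with a vendor MAC) is to pin a fixed MAC at boot so DHCP leases survive reboots:

        # Hypothetical workaround for a MAC address that changes every boot
        ip link set dev eth0 down
        ip link set dev eth0 address 02:11:22:33:44:55
        ip link set dev eth0 up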
     
     
  17. Like
    Stuart Naylor reacted to zador.blood.stained in ROCK64   
    Took some time to get some actual numbers.
    "cryptsetup benchmark" shows similar (within ±5% margin) results on both Xenial and Stretch:

    root@rock64:~# cryptsetup benchmark
    # Tests are approximate using memory only (no storage IO).
    PBKDF2-sha1       273066 iterations per second for 256-bit key
    PBKDF2-sha256     514007 iterations per second for 256-bit key
    PBKDF2-sha512     214872 iterations per second for 256-bit key
    PBKDF2-ripemd160  161817 iterations per second for 256-bit key
    PBKDF2-whirlpool   72817 iterations per second for 256-bit key
    #  Algorithm | Key |  Encryption |  Decryption
         aes-cbc   128b   366.3 MiB/s   455.7 MiB/s
     serpent-cbc   128b    25.0 MiB/s    27.4 MiB/s
     twofish-cbc   128b    29.4 MiB/s    30.9 MiB/s
         aes-cbc   256b   314.2 MiB/s   412.9 MiB/s
     serpent-cbc   256b    25.3 MiB/s    27.4 MiB/s
     twofish-cbc   256b    29.5 MiB/s    30.9 MiB/s
         aes-xts   256b   401.9 MiB/s   403.9 MiB/s
     serpent-xts   256b    26.7 MiB/s    28.0 MiB/s
     twofish-xts   256b    31.3 MiB/s    31.6 MiB/s
         aes-xts   512b   365.8 MiB/s   365.4 MiB/s
     serpent-xts   512b    26.7 MiB/s    27.9 MiB/s
     twofish-xts   512b    31.4 MiB/s    31.6 MiB/s

    The openssl benchmark results are a little bit different, so I'm not sure if "benchmarking gone wrong" or what.
    Xenial:
    root@rock64:~# openssl speed -elapsed -evp aes-128-cbc aes-192-cbc aes-256-cbc
    (cut)
    OpenSSL 1.0.2g  1 Mar 2016
    built on: reproducible build, date unspecified
    options:bn(64,64) rc4(ptr,char) des(idx,cisc,16,int) aes(partial) blowfish(ptr)
    compiler: cc -I. -I.. -I../include -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -DL_ENDIAN -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack -Wall -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM
    The 'numbers' are in 1000s of bytes per second processed.
    type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
    aes-128-cbc     163161.40k   436259.80k   729289.90k   906723.33k   975929.34k
    aes-192-cbc     152362.85k   375675.22k   582690.99k   693259.95k   733563.56k
    aes-256-cbc     145928.50k   337163.26k   498586.20k   577371.48k   605145.77k

    Stretch:
    OpenSSL 1.1.0f  25 May 2017
    built on: reproducible build, date unspecified
    options:bn(64,64) rc4(char) des(int) aes(partial) blowfish(ptr)
    compiler: gcc -DDSO_DLFCN -DHAVE_DLFCN_H -DNDEBUG -DOPENSSL_THREADS -DOPENSSL_NO_STATIC_ENGINE -DOPENSSL_PIC -DOPENSSL_BN_ASM_MONT -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DVPAES_ASM -DECP_NISTZ256_ASM -DPOLY1305_ASM -DOPENSSLDIR="\"/usr/lib/ssl\"" -DENGINESDIR="\"/usr/lib/aarch64-linux-gnu/engines-1.1\""
    The 'numbers' are in 1000s of bytes per second processed.
    type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
    aes-128-cbc      89075.61k   281317.21k   589750.10k   844657.32k   965124.10k   975323.14k
    aes-192-cbc      85167.28k   252748.95k   487843.41k   655406.42k   727607.98k   733538.99k
    aes-256-cbc      83124.71k   235290.07k   427535.10k   550874.11k   600997.89k   603417.26k

    Edit: looks like benchmarking actually went wrong and "-evp" parameter placement (or existence) on the command line affects the benchmark
    Edit 2: Redid and updated Stretch numbers
    Edit 3: Redid and updated Xenial numbers
     
    Had to run "openssl speed -elapsed -evp <alg>" for each algorithm separately.
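    A simple loop covers that (the algorithm list is just whatever you want to compare):

        # One invocation per algorithm so -evp applies to each benchmark run
        for alg in aes-128-cbc aes-192-cbc aes-256-cbc; do
            openssl speed -elapsed -evp "$alg"
        done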
  18. Like
    Stuart Naylor reacted to willmore in ROCK64   
    @tkaiser, while all of that is correct, there is another use case that you didn't consider.  What about using this for a NAS/router?  One interface towards the internal network and the other to the outside.  Not all of the traffic has to terminate on the Rock64.
     
     
  19. Like
    Stuart Naylor reacted to zador.blood.stained in ROCK64   
    Rock64 also shows exceptionally high numbers for AES encryption in the cryptsetup benchmark, and I wonder if it would also show such high numbers in Syncthing, which would make it a very good node for a personal backup infrastructure based on Syncthing (or BTSync, which also uses AES).