djurny

Members
  • Content Count: 16

Reputation Activity

  1. Like
    djurny got a reaction from aprayoga in Helios4 Support   
    Hi,
A quick update for all whose Helios4 boxes freeze up. After a board replacement and switching PSUs, the box still froze from time to time. After the last freeze, I decided to move the rootfs to a USB stick, to rule out anything related to the SD card.
     
The SD card is a SanDisk Ultra 32GB Class 10 A1 UHS-I HC1 type of card (picture below).
     
     
After using the SD card for booting kernel + initrd only, the box has been going strong for quite a while, both under load and when idling:
     
07:39:29 up 21 days, 29 min, 6 users, load average: 0.14, 0.14, 0.10
Note that the uptime is actually longer than shown; the box was rebooted due to unrelated issues and some planned downtime.
     
    Hope this will help any of you.
     
    Groetjes,
  2. Like
    djurny got a reaction from sirleon in Helios4 Support   
  3. Like
    djurny reacted to aprayoga in Helios4 Support   
    Hi,
     
could you append extraargs=no_console_suspend ignore_loglevel to /boot/armbianEnv.txt and set loglevel to 8?
echo "extraargs=no_console_suspend ignore_loglevel" | sudo tee -a /boot/armbianEnv.txt
sudo sed -i 's/exit 0/dmesg -n 8\nexit 0/g' /etc/rc.local
Then reboot the system to apply the changes.
Those commands disable log filtering and print some debug info when the system enters suspend mode.
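As a sanity check after the reboot, the resulting console loglevel can be read back from procfs. A sketch, assuming a Linux system; `dmesg -n 8` sets the first field of this file to 8:

```shell
# The first field of /proc/sys/kernel/printk is the current console
# loglevel; with the tweak above it should read 8 (debug and up
# reach the console).
awk '{ print $1 }' /proc/sys/kernel/printk
```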
     
  4. Like
    djurny reacted to gprovost in Helios4 - Cryptographic Engines And Security Accelerator (CESA) Benchmarking   
Just for info, I updated the libssl1.0.2 cryptodev patch to work with the latest libssl1.0.2 Debian deb (libssl1.0.2_1.0.2r).
     
    https://wiki.kobol.io/cesa/#network-application-encryption-acceleration
     
So if some people wanna test offloading crypto operations while using apache2 or ssh, for example ;-) You can also find a pre-built dpkg here if you want to skip the compile steps.
  5. Like
    djurny got a reaction from gprovost in Helios4 - Cryptographic Engines And Security Accelerator (CESA) Benchmarking   
    L.S.,
    A quick update on the anecdotal performance of LUKS2 over LUKS;
     
md5sum'ing ~1 TiB of datafiles on LUKS:
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.15  20.01    59.48     2.60    0.00  17.76
Device:   tps  MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
sdb    339.20      84.80       0.00      848        0
dm-2   339.20      84.80       0.00      848        0

md5sum'ing ~1 TiB of datafiles on LUKS2:
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.05  32.37    36.32     0.75    0.00  30.52
Device:   tps  MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
sdd    532.70     133.18       0.00     1331        0
dm-0   532.80     133.20       0.00     1332        0

sdb:
  • sdb1 optimally aligned using parted.
  • LUKS(1) w/aes-cbc-essiv:256, w/256-bit key size.
  • XFS with 4096-byte sector size.
  • xfs_fsr'd regularly, negligible file fragmentation.
sdd:
  • sdd1 optimally aligned using parted.
  • LUKS2 w/aes-cbc-essiv:256, w/256-bit key size and w/4096-byte sector size, as @matzman666 suggested.
  • XFS with 4096-byte sector size.
    Content-wise: sdd1 is a file-based copy of sdb1 (about to wrap up the migration from LUKS(1) to LUKS2).
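For anyone wanting to reproduce a layout like sdd's, a rough sketch (not the exact commands from this thread; the device path and mapping name are placeholders, and cryptsetup spells this cipher aes-cbc-essiv:sha256):

```shell
# Sketch only. "$1" is a placeholder block device, e.g. /dev/sdX1;
# luksFormat DESTROYS whatever is on it. Requires cryptsetup 2.x
# for the --sector-size option.
format_luks2_4k() {
    sudo cryptsetup luksFormat --type luks2 \
        --cipher aes-cbc-essiv:sha256 --key-size 256 \
        --sector-size 4096 "$1"
    sudo cryptsetup open "$1" crypt_target
    # Match the XFS sector size to the LUKS2 sector size.
    sudo mkfs.xfs -s size=4096 /dev/mapper/crypt_target
}
# Usage: format_luks2_4k /dev/sdX1
```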
     
    Overall a very nice improvement!
     
    Groetjes,
     
p.s. Not sure if it added to the performance, but I also spread the IRQ assignments over both CPUs, making sure that each CESA and XOR engine has its own CPU. Originally all IRQs were handled by one and the same CPU. For reasons yet unclear, irqbalance refused to dynamically reallocate the IRQs over the CPUs. Perhaps the algorithm used by irqbalance does not apply well to ARM or the Armada SoC (initial assumption: something with cpumask being reported as '00000002', causing irqbalance to only balance on and to CPU1?).
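Pinning can be done by hand by writing a hex CPU mask into procfs. A sketch (IRQ numbers are board-specific; 38/39 match the f1090000 IRQ threads in the top output further down, but verify against /proc/interrupts first):

```shell
# Sketch: pin an IRQ to a CPU by writing a hex CPU mask to its
# smp_affinity file (mask 1 = CPU0, mask 2 = CPU1). Needs root.
pin_irq() {
    irq="$1" mask="$2"
    printf '%s\n' "$mask" | sudo tee "/proc/irq/${irq}/smp_affinity" >/dev/null
}
# e.g.: pin_irq 38 1    # first CESA engine  -> CPU0
#       pin_irq 39 2    # second CESA engine -> CPU1
```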
  6. Like
    djurny got a reaction from gprovost in Helios4 - Cryptographic Engines And Security Accelerator (CESA) Benchmarking   
@matzman666 Sorry, no measurements. From memory, the numbers for raw read performance were well above 100 MB/s according to 'hdparm -t'. Currently my box is live, so no more testing with empty FSes on unencrypted devices for now. Perhaps someone else can help out?
     
    -[ edit ]-
    So my 2nd box is alive. The setup is slightly different and not yet complete.
     
I quickly built cryptsetup 2.x from source on Armbian; it was not as tough as I expected - pretty straightforward: configure, correct & configure, correct & ...
     
    cryptsetup 2.x requires the following packages to be installed:
    uuid-dev libdevmapper-dev libpopt-dev pkg-config libgcrypt-dev libblkid-dev  
    Not sure about these ones, but I installed them anyway:
    libdmraid-dev libjson-c-dev  
    Build & install:
Download cryptsetup 2.x via https://gitlab.com/cryptsetup/cryptsetup/tree/master.
Unxz the tarball, then:
./configure --prefix=/usr/local
make
sudo make install
sudo ldconfig
    Configuration:
2x WD Blue 2TB HDDs.
4 KiB sector-aligned GPT partitions.
mdadm RAID6 (degraded).
    Test:
Write 4 GiB worth of zeroes: dd if=/dev/zero of=[dut] bs=4096 count=1048576 conv=fsync
  • directly to the mdadm device.
  • to a file on an XFS filesystem on top of the mdadm device.
  • directly to the LUKS2 device on top of the mdadm device (512 B/4096 B sector sizes for LUKS2).
  • to a file on an XFS filesystem on top of the LUKS2 device on top of the mdadm device (512 B/4096 B sector sizes for LUKS2).
    Results: 


    Caveat:
  • CPU load is high: >75%, due to mdadm using the CPU for parity calculations. If the box is used as just a file server for a handful of clients, this should be no problem; but if more processing is done besides serving files, e.g. transcoding or (desktop) applications, this might become problematic.
  • The RAID6 under test was in degraded mode. I don't have enough disks for a fully functional RAID6 array yet - no time to tear down my old server. A full RAID6 array might impact parity calculations and add 2x more disk I/O to the mix.
I might consider re-encrypting the disks in my first box, to see if LUKS2 w/4 KiB sectors improves SnapRAID performance over LUKS(1) w/512 B sectors. Currently it takes over 13 hours to scrub 50% of a 2-parity SnapRAID configuration holding less than 4 TB of data.
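As a side note, the dd invocation above scales down easily for a quick file-based smoke test (40 MiB instead of 4 GiB, so it runs on any box; the output path is a placeholder - point it at a file on the filesystem under test):

```shell
# Scaled-down sketch of the write test: 40 MiB of zeroes, fsync'd
# so the timing includes flushing to stable storage.
dut=./removeme.dd
dd if=/dev/zero of="$dut" bs=4096 count=$(( 40 * 1024 * 1024 / 4096 )) conv=fsync
rm -f "$dut"
```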
     
    -[ update ]-
    Additional test:
Write/read 4 GiB worth of zeroes to a file on an XFS filesystem; tested on armbian/linux packages 5.73 (upgraded from 5.70):
for i in {0..9} ; do
  time dd if=/dev/zero of=removeme.${i} bs=4096 count=$(( 4 * 1024 * 1024 * 1024 / 4096 )) conv=fsync
  dd if=removeme.$(( 9 - ${i} )) of=/dev/null bs=4096
done 2>&1 | egrep '.*bytes.*copied.*'
    Results:

     
    The write throughput appears to be slightly higher, as now the XOR HW engine is being used - but it could just as well be measurement noise.
    CPU load is still quite high during this new test:
%Cpu0 :  0.0 us, 97.5 sy,  1.9 ni,  0.3 id,  0.3 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1 :  0.3 us, 91.4 sy,  1.1 ni,  0.0 id,  0.0 wa,  0.0 hi,  7.2 si,  0.0 st
<snip>
 176 root      20   0       0      0      0 R  60.9  0.0  14:59.42 md0_raid6
1807 root      30  10    1392    376    328 R  45.7  0.0   0:10.97 dd
9087 root       0 -20       0      0      0 D  28.5  0.0   1:16.53 kworker/u5:1
  34 root       0 -20       0      0      0 D  19.0  0.0   0:53.93 kworker/u5:0
 149 root     -51   0       0      0      0 S  14.4  0.0   3:19.35 irq/38-f1090000
 150 root     -51   0       0      0      0 S  13.6  0.0   3:12.89 irq/39-f1090000
5567 root      20   0       0      0      0 S   9.5  0.0   1:09.40 dmcrypt_write
<snip>
Will update this again once the RAID6 array setup is complete.
     
    Groetjes,