
I recently bought an Odroid HC4 to use as a NAS.

I then installed the latest Armbian (kernel 5.10, so pretty current) and zfs-dkms (2.0.3-1 from backports, so again it should be pretty current).
I can now create ZFS pools from my attached HDDs (currently just one; I may add a mirror later). Great... except I want them encrypted, and performance is BAD. Like 40 to 70 MB/s when copying via Samba (depending on which cipher I create the dataset with; no compression or deduplication) and CPU pegged at 100% bad.
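For reference, the native-encryption setup I tested looked roughly like this (pool and dataset names are illustrative, and aes-256-gcm is just one of the supported ciphers I tried):

```shell
# Pool directly on the raw disk; encryption is then set per-dataset.
# aes-256-gcm is the OpenZFS 2.0 default when encryption=on.
zpool create tank0 /dev/sda
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           -o compression=off -o dedup=off \
           tank0/data
```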

I was honestly surprised by this, considering the S905X3 is supposed to have hardware crypto acceleration (and cryptsetup benchmark shows results more in line with what I'd expect), so as a last-ditch attempt I used LUKS to encrypt the drive and created the ZFS pool on top of the mapped block device.
Now Samba copies run at gigabit speed and CPU usage is ~70%.
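The LUKS-on-the-bottom layout that performs well was set up roughly like this (device and mapping names match the lsblk output further down; note luksFormat wipes the disk):

```shell
# Encrypt the whole disk with LUKS (aes-xts-plain64 by default),
# open it, and build the pool on the mapped device instead of sda.
cryptsetup luksFormat /dev/sda
cryptsetup open /dev/sda sda.luks
zpool create tank0 /dev/mapper/sda.luks
```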

Is there a reason for such a massive discrepancy? I'd really like to keep ZFS as close to the metal as possible, but I cannot give away so much performance. Is there anything else I can attempt as a fix?

I could find some reports of similar performance issues with ZFS, but those should have been fixed by now. Is this something different (maybe the fix was specific to AES-NI and x86?), or did the fix not make it into Armbian's ZFS packages?
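In case it helps diagnose this, these are the checks I know of for whether hardware crypto is actually being used (the module parameter paths assume OpenZFS 2.x; "generic" there would mean ZFS fell back to unaccelerated code):

```shell
# ARMv8 Crypto Extensions show up as "aes", "pmull", "sha2" flags:
grep -m1 Features /proc/cpuinfo

# Which AES/GCM implementation OpenZFS's crypto layer (ICP) selected:
cat /sys/module/icp/parameters/icp_aes_impl
cat /sys/module/icp/parameters/icp_gcm_impl
```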


For reference:

cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1       477493 iterations per second for 256-bit key
PBKDF2-sha256     878204 iterations per second for 256-bit key
PBKDF2-sha512     447344 iterations per second for 256-bit key
PBKDF2-ripemd160  305529 iterations per second for 256-bit key
PBKDF2-whirlpool  114573 iterations per second for 256-bit key
argon2i       4 iterations, 477274 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
argon2id      4 iterations, 479560 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
#     Algorithm |       Key |      Encryption |      Decryption
        aes-cbc        128b       535.0 MiB/s       591.1 MiB/s
    serpent-cbc        128b        44.9 MiB/s        49.5 MiB/s
    twofish-cbc        128b        70.0 MiB/s        74.9 MiB/s
        aes-cbc        256b       464.2 MiB/s       547.2 MiB/s
    serpent-cbc        256b        44.9 MiB/s        49.5 MiB/s
    twofish-cbc        256b        70.0 MiB/s        74.8 MiB/s
        aes-xts        256b       566.8 MiB/s       566.2 MiB/s
    serpent-xts        256b        46.3 MiB/s        49.9 MiB/s
    twofish-xts        256b        73.6 MiB/s        76.0 MiB/s
        aes-xts        512b       526.9 MiB/s       526.4 MiB/s
    serpent-xts        512b        46.3 MiB/s        49.9 MiB/s
    twofish-xts        512b        73.6 MiB/s        76.0 MiB/s
lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0  2.7T  0 disk  
└─sda.luks  251:0    0  2.7T  0 crypt
zpool status tank0
  pool: tank0
 state: ONLINE
config:

   NAME        STATE     READ WRITE CKSUM
   tank0       ONLINE       0     0     0
     sda.luks  ONLINE       0     0     0

errors: No known data errors
sync; dd if=/dev/zero of=/tank0/testfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.21141 s, 149 MB/s
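(A caveat on that dd number: it only measures sequential writes, and would be meaningless on a compressed dataset; here compression is off, so it's a rough figure. Including the flush in the timing itself gives a slightly more honest result:)

```shell
# conv=fdatasync makes dd include the final flush in the reported
# time, instead of relying on the surrounding sync calls:
dd if=/dev/zero of=/tank0/testfile bs=1M count=1024 conv=fdatasync
rm /tank0/testfile
```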
