gprovost

Members
  • Content Count

    216
  • Joined

  • Last visited

1 Follower

About gprovost

  • Rank
    Elite member

Contact Methods

  • Website URL
    http://kobol.io

Profile Information

  • Gender
    Male
  • Location
    Singapore

Recent Profile Visitors

1194 profile views
  1. Most probably they are not loaded because no software is calling the AF_ALG API. Again, did you configure OpenSSL to use AF_ALG?
  2. @Mangix algif_hash and algif_skcipher are already compiled as modules, so they aren't missing: Kernel 4.19 https://github.com/armbian/build/blob/master/config/kernel/linux-mvebu-next.config#L5113 Kernel 4.14 https://github.com/armbian/build/blob/master/config/kernel/linux-mvebu-default.config#L5161 Did you configure OpenSSL to offload to AF_ALG? https://wiki.kobol.io/cesa/#configure-openssl (a minimal config sketch follows below)
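As a minimal sketch of the AF_ALG setup (illustrative only, check the wiki page above for the authoritative steps): first confirm the modules are there, then tell OpenSSL to load the afalg engine from openssl.cnf.

    lsmod | grep algif                         # algif_hash / algif_skcipher loaded?
    sudo modprobe algif_hash algif_skcipher    # load them manually if needed

    # Illustrative additions to /etc/ssl/openssl.cnf
    # (the openssl_conf line must sit in the default, top-of-file section)
    openssl_conf = openssl_init

    [openssl_init]
    engines = engine_section

    [engine_section]
    afalg = afalg_section

    [afalg_section]
    engine_id = afalg
    default_algorithms = ALL
    init = 1

    # Quick benchmark to verify offload through the afalg engine
    openssl speed -evp aes-128-cbc -engine afalg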
  3. Couldn't it be the ribbon cables of the fans touching the fan blades?
  4. @Jeckyll Maybe you should first double-check that fancontrol is properly installed and configured. Refer to this section of our wiki: https://wiki.kobol.io/pwm/#fancontrol-automated-software-based-fan-speed-control (a few quick checks are sketched below)
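A few quick checks, assuming the standard Debian fancontrol packaging used in the wiki (adapt if your setup differs):

    dpkg -l fancontrol            # package installed?
    systemctl status fancontrol   # service enabled and running?
    cat /etc/fancontrol           # config present and pointing at the Helios4 pwm/hwmon paths?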
  5. @uiop @devman Well, it would be great to first cross-check what I reported about BTRFS tools on 32-bit arch; maybe there have been some improvements by the BTRFS community. @jimandroidpc Yes, the L1 and L2 caches are parity protected on the Marvell Armada 38x. It wouldn't make sense to have ECC RAM with a CPU (SoC) whose caches aren't at least protected by a parity-check mechanism.
  6. I just tried from scratch again with Armbian_5.91_Helios4_Debian_buster_next_4.19.63.7z, which I downloaded from our wiki, and it works with the same result as what I reported first. Pretty crazy that we don't get the same result. Helios4_Armbian_Buster_ZoL_Install.log I can see there are 2 DKMS issues currently open on https://github.com/armbian/build/issues Not sure it's related, but there might indeed be something to dig into there. I also witnessed an unrelated APT repo issue when doing apt-get update; I had to run it twice, as you will see in the log file. I was going to raise an issue for that, but it turned out to be just a clock issue.
  7. @devman https://btrfs.wiki.kernel.org/index.php/Gotchas#8TiB_limit_on_32-bit_systems
  8. It's the integrated memory controller of the SoC that is in charge of the ECC operation, not the operating system. That's why it's transparent to the OS, but yes, when you see it reported as Enabled during U-Boot initialization it means it is in use. I guess I would have to redo some tests to cross-check the delta you see between the 2 fan setups. But I think @devman makes a good point that sensor location might have a significant impact. I think the main point of comparison should be the HDD temperatures. Well, that's the cool thing about Helios4: it gives people the flexibility to fine-tune their setup according to their needs / environment.
  9. That's a good question and I don't have a detailed list of partitioning & fs tool behaviors. Some tools will prevent it, but others will let the user create a partition bigger than 16TB, and the issue will only show up later (e.g. during the fs inode table initialization, or when anything tries to access a block beyond the 16TB region). I must admit it would be a very useful investigation to test all the use cases with the different tools available. As for btrfs, apparently on 32-bit architectures it is recommended not to create volumes bigger than 8TB, because some btrfs tools might not work properly on 32-bit arch with bigger partitions. I'm just reporting what I read online; I haven't tested it.
  10. @uiop The 32-bit architecture limitation is actually in the Linux page cache, which is used by the file systems, and it limits the maximum usable size of a partition or logical volume to 16TB. This doesn't stop you from having several partitions (or logical volumes) of <16TB each. Here is an example considering 4x 12TB HDDs: If you use mdadm and set up a RAID6 or RAID10 array, you will have an array with 24TB of usable space. You can then create 2x 12TB partitions, or any other combination that maxes out the array size, as long as each partition doesn't exceed 16TB. If you use LVM and set up a Volume Group (VG) with the 4 drives as PVs, you will get a volume group with 48TB of usable space. You can then create as many Logical Volumes (LVs) as you need (max: 255) to max out the VG size, as long as each LV doesn't exceed 16TB. Actually, a good approach is to use LVM on top of an mdadm RAID; this way you keep the flexibility to resize your partitions, or I should say Logical Volumes (LVs). You can of course also use neither mdadm nor LVM and just set up each disk individually, as long as you follow the rule that you can't create a single partition bigger than 16TB; you can still create several partitions. Hope it clarifies. (A rough command sketch follows below.)
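A rough command sketch of the mdadm + LVM approach described above; device names, RAID level and LV sizes are illustrative, so adapt them to your actual drives:

    # 4x 12TB drives -> RAID6 array (~24TB usable)
    sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # LVM on top of the array, for flexible resizing later
    sudo pvcreate /dev/md0
    sudo vgcreate vg_data /dev/md0

    # Keep each Logical Volume under the 16TB 32-bit limit
    sudo lvcreate -L 12T -n lv_media vg_data
    sudo lvcreate -l 100%FREE -n lv_backup vg_data

    sudo mkfs.ext4 /dev/vg_data/lv_media
    sudo mkfs.ext4 /dev/vg_data/lv_backup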
  11. This is the log of a ZFS install, setup and test on a fresh Armbian Buster on Helios4 that I did yesterday. Note 1: I set up APT (/etc/apt/preferences.d/90_zfs) to force the use of backports. Note 2: You will see an error when the zfsutils-linux postinst script tries to start the ZFS services. Just ignore it; after systemctl daemon-reload and a reboot, everything initializes properly. user@helios4:~$ sudo apt-get install linux-headers-next-mvebu Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: linux-headers-next-mvebu 0 upgraded, 1 newly installed, 0 to remove and 44 not upgraded. Need to get 10.6 MB of archives. After this operation, 74.0 MB of additional disk space will be used. Get:1 https://apt.armbian.com buster/main armhf linux-headers-next-mvebu armhf 5.91 [10.6 MB] Fetched 10.6 MB in 15s (726 kB/s) Selecting previously unselected package linux-headers-next-mvebu. (Reading database ... 32967 files and directories currently installed.) Preparing to unpack .../linux-headers-next-mvebu_5.91_armhf.deb ... Unpacking linux-headers-next-mvebu (5.91) ... Setting up linux-headers-next-mvebu (5.91) ... Compiling headers - please wait ... user@helios4:~$ sudo apt-get install zfs-dkms zfsutils-linux Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: dkms file libelf-dev libmagic-mgc libmagic1 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms Suggested packages: python3-apport menu spl nfs-kernel-server samba-common-bin zfs-initramfs | zfs-dracut Recommended packages: fakeroot linux-headers-686-pae | linux-headers-amd64 | linux-headers-generic | linux-headers zfs-zed The following NEW packages will be installed: dkms file libelf-dev libmagic-mgc libmagic1 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms zfs-dkms zfsutils-linux 0 upgraded, 12 newly installed, 0 to remove and 44 not upgraded. Need to get 3424 kB of archives. After this operation, 23.2 MB of additional disk space will be used. Do you want to continue? [Y/n] Get:1 http://cdn-fastly.deb.debian.org/debian buster/main armhf dkms all 2.6.1-4 [74.4 kB] Get:2 http://cdn-fastly.deb.debian.org/debian buster/main armhf libmagic-mgc armhf 1:5.35-4 [242 kB] Get:3 http://cdn-fastly.deb.debian.org/debian buster/main armhf libmagic1 armhf 1:5.35-4 [110 kB] Get:4 http://cdn-fastly.deb.debian.org/debian buster/main armhf file armhf 1:5.35-4 [65.4 kB] Get:5 http://cdn-fastly.deb.debian.org/debian buster/main armhf libelf-dev armhf 0.176-1.1 [69.2 kB] Get:6 http://cdn-fastly.deb.debian.org/debian buster-backports/main armhf spl-dkms all 0.7.13-1~bpo10+1 [400 kB] Get:7 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf zfs-dkms all 0.7.13-1~bpo10+1 [1402 kB] Get:8 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libuutil1linux armhf 0.7.13-1~bpo10+1 [54.4 kB] Get:9 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libnvpair1linux armhf 0.7.13-1~bpo10+1 [43.9 kB] Get:10 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libzpool2linux armhf 0.7.13-1~bpo10+1 [548 kB] Get:11 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf libzfs2linux armhf 0.7.13-1~bpo10+1 [130 kB] Get:12 http://cdn-fastly.deb.debian.org/debian buster-backports/contrib armhf zfsutils-linux armhf 0.7.13-1~bpo10+1 [284 kB] Fetched 3424 kB in 4s (868 kB/s) Preconfiguring packages ...
Selecting previously unselected package dkms. (Reading database ... 57533 files and directories currently installed.) Preparing to unpack .../0-dkms_2.6.1-4_all.deb ... Unpacking dkms (2.6.1-4) ... Selecting previously unselected package libmagic-mgc. Preparing to unpack .../1-libmagic-mgc_1%3a5.35-4_armhf.deb ... Unpacking libmagic-mgc (1:5.35-4) ... Selecting previously unselected package libmagic1:armhf. Preparing to unpack .../2-libmagic1_1%3a5.35-4_armhf.deb ... Unpacking libmagic1:armhf (1:5.35-4) ... Selecting previously unselected package file. Preparing to unpack .../3-file_1%3a5.35-4_armhf.deb ... Unpacking file (1:5.35-4) ... Selecting previously unselected package libelf-dev:armhf. Preparing to unpack .../4-libelf-dev_0.176-1.1_armhf.deb ... Unpacking libelf-dev:armhf (0.176-1.1) ... Selecting previously unselected package spl-dkms. Preparing to unpack .../5-spl-dkms_0.7.13-1~bpo10+1_all.deb ... Unpacking spl-dkms (0.7.13-1~bpo10+1) ... Setting up dkms (2.6.1-4) ... Setting up libmagic-mgc (1:5.35-4) ... Setting up libmagic1:armhf (1:5.35-4) ... Setting up file (1:5.35-4) ... Setting up libelf-dev:armhf (0.176-1.1) ... Setting up spl-dkms (0.7.13-1~bpo10+1) ... Loading new spl-0.7.13 DKMS files... Building for 4.19.63-mvebu Building initial module for 4.19.63-mvebu Done. spl.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/ splat.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/ depmod... Warning: The unit file, source configuration file or drop-ins of systemd-modules-load.service changed on disk. Run 'systemctl daemon-reload' to reload units. DKMS: install completed. Selecting previously unselected package zfs-dkms. (Reading database ... 57954 files and directories currently installed.) Preparing to unpack .../0-zfs-dkms_0.7.13-1~bpo10+1_all.deb ... Unpacking zfs-dkms (0.7.13-1~bpo10+1) ... Selecting previously unselected package libuutil1linux. Preparing to unpack .../1-libuutil1linux_0.7.13-1~bpo10+1_armhf.deb ... Unpacking libuutil1linux (0.7.13-1~bpo10+1) ... Selecting previously unselected package libnvpair1linux. Preparing to unpack .../2-libnvpair1linux_0.7.13-1~bpo10+1_armhf.deb ... Unpacking libnvpair1linux (0.7.13-1~bpo10+1) ... Selecting previously unselected package libzpool2linux. Preparing to unpack .../3-libzpool2linux_0.7.13-1~bpo10+1_armhf.deb ... Unpacking libzpool2linux (0.7.13-1~bpo10+1) ... Selecting previously unselected package libzfs2linux. Preparing to unpack .../4-libzfs2linux_0.7.13-1~bpo10+1_armhf.deb ... Unpacking libzfs2linux (0.7.13-1~bpo10+1) ... Selecting previously unselected package zfsutils-linux. Preparing to unpack .../5-zfsutils-linux_0.7.13-1~bpo10+1_armhf.deb ... Unpacking zfsutils-linux (0.7.13-1~bpo10+1) ... Setting up libuutil1linux (0.7.13-1~bpo10+1) ... Setting up zfs-dkms (0.7.13-1~bpo10+1) ... WARNING: Building ZFS module on a 32-bit kernel. Loading new zfs-0.7.13 DKMS files... Building for 4.19.63-mvebu Building initial module for 4.19.63-mvebu Done. zavl.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/ znvpair.ko: Running module version sanity check. 
- Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/ zunicode.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/ zcommon.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/ zfs.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/ zpios.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/ icp.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/4.19.63-mvebu/updates/dkms/ depmod... Warning: The unit file, source configuration file or drop-ins of systemd-modules-load.service changed on disk. Run 'systemctl daemon-reload' to reload units. DKMS: install completed. Setting up libnvpair1linux (0.7.13-1~bpo10+1) ... Setting up libzpool2linux (0.7.13-1~bpo10+1) ... Setting up libzfs2linux (0.7.13-1~bpo10+1) ... Setting up zfsutils-linux (0.7.13-1~bpo10+1) ... Created symlink /etc/systemd/system/zfs-import.target.wants/zfs-import-cache.service -> /lib/systemd/system/zfs-import-cache.service. Created symlink /etc/systemd/system/zfs-mount.service.wants/zfs-import.target -> /lib/systemd/system/zfs-import.target. Created symlink /etc/systemd/system/zfs.target.wants/zfs-import.target -> /lib/systemd/system/zfs-import.target. Created symlink /etc/systemd/system/zfs-share.service.wants/zfs-mount.service -> /lib/systemd/system/zfs-mount.service. Created symlink /etc/systemd/system/zfs.target.wants/zfs-mount.service -> /lib/systemd/system/zfs-mount.service. Created symlink /etc/systemd/system/zfs.target.wants/zfs-share.service -> /lib/systemd/system/zfs-share.service. Created symlink /etc/systemd/system/multi-user.target.wants/zfs.target -> /lib/systemd/system/zfs.target. zfs-import-scan.service is a disabled or a static unit, not starting it. Job for zfs-mount.service failed because the control process exited with error code. See "systemctl status zfs-mount.service" and "journalctl -xe" for details. invoke-rc.d: initscript zfs-mount, action "start" failed. * zfs-mount.service - Mount ZFS filesystems Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Mon 2019-09-09 07:39:31 UTC; 22ms ago Docs: man:zfs(8) Process: 7556 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE) Main PID: 7556 (code=exited, status=1/FAILURE) Sep 09 07:39:21 helios4 systemd[1]: Starting Mount ZFS filesystems... Sep 09 07:39:31 helios4 zfs[7556]: /dev/zfs and /proc/self/mounts are required. Sep 09 07:39:31 helios4 zfs[7556]: Try running 'udevadm trigger' and 'mount -t proc proc /proc' as root. Sep 09 07:39:31 helios4 systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE Sep 09 07:39:31 helios4 systemd[1]: zfs-mount.service: Failed with result 'exit-code'. Sep 09 07:39:31 helios4 systemd[1]: Failed to start Mount ZFS filesystems. 
dpkg: error processing package zfsutils-linux (--configure): installed zfsutils-linux package post-installation script subprocess returned error exit status 1 Processing triggers for systemd (241-5) ... Processing triggers for man-db (2.8.5-2) ... Processing triggers for libc-bin (2.28-10) ... Errors were encountered while processing: zfsutils-linux E: Sub-process /usr/bin/dpkg returned an error code (1) user@helios4:~$ sudo systemctl daemon-reload user@helios4:~$ sudo reboot $ ssh 10.10.10.1 user@10.10.10.1's password: _ _ _ _ _ _ | | | | ___| (_) ___ ___| || | | |_| |/ _ \ | |/ _ \/ __| || |_ | _ | __/ | | (_) \__ \__ _| |_| |_|\___|_|_|\___/|___/ |_| Welcome to Debian Buster with Armbian Linux 4.19.63-mvebu System load: 1.11 0.30 0.10 Up time: 0 min Memory usage: 4 % of 2015MB IP: 10.10.10.1 CPU temp: 55°C Ambient temp: 36°C Usage of /: 10% of 15G [ 0 security updates available, 44 updates total: apt upgrade ] Last check: 2019-09-09 07:41 Last login: Mon Sep 9 05:16:46 2019 from 10.10.10.254 user@helios4:~$ systemctl status zfs-mount.service * zfs-mount.service - Mount ZFS filesystems Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled) Active: active (exited) since Mon 2019-09-09 07:41:22 UTC; 1min 44s ago Docs: man:zfs(8) Process: 412 ExecStart=/sbin/zfs mount -a (code=exited, status=0/SUCCESS) Main PID: 412 (code=exited, status=0/SUCCESS) Sep 09 07:41:21 helios4 systemd[1]: Starting Mount ZFS filesystems... Sep 09 07:41:22 helios4 systemd[1]: Started Mount ZFS filesystems. user@helios4:~$ sudo zpool create stock mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd user@helios4:~$ sudo zpool status pool: stock state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM stock ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 sda ONLINE 0 0 0 sdb ONLINE 0 0 0 mirror-1 ONLINE 0 0 0 sdc ONLINE 0 0 0 sdd ONLINE 0 0 0 errors: No known data errors user@helios4:~$ sudo zpool list NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT stock 222G 148K 222G - 0% 0% 1.00x ONLINE - user@helios4:~$ sudo zfs set dedup=off stock user@helios4:~$ sudo zfs create stock/stuff user@helios4:~$ sudo dd if=/dev/zero of=/stock/stuff/test.data bs=1M count=4000 4000+0 records in 4000+0 records out 4194304000 bytes (4.2 GB, 3.9 GiB) copied, 47.5516 s, 88.2 MB/s user@helios4:~$ sudo zpool list NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT stock 222G 3.91G 218G - 0% 1% 1.00x ONLINE - user@helios4:~$ sudo dd if=/stock/stuff/test.data of=/dev/null bs=1M 4000+0 records in 4000+0 records out 4194304000 bytes (4.2 GB, 3.9 GiB) copied, 18.4318 s, 228 MB/s
  12. @SvenHz First you should purge any ZFS packages to try from scratch again (see the purge sketch below). Then, if you want to install from backports, you need to configure APT to use backports for all the ZFS dependencies. Create the file /etc/apt/preferences.d/90_zfs with the following content:
Package: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms zfs-dkms zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed
Pin: release n=buster-backports
Pin-Priority: 990
Then redo the install; you won't need to specify "-t buster-backports" in your apt command. I also tested on Buster yesterday and it works without issue.
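For the purge step, something along these lines should do (package names taken from the install log above; double-check with dpkg -l | grep zfs what is actually installed on your system):

    sudo apt-get purge zfs-dkms zfsutils-linux spl-dkms
    sudo apt-get autoremove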
  13. You don't have to wait; running apt-get upgrade will update your system to the same package versions as (or even more recent than) the latest Debian release image. Think of a Debian release update as just a new image built with the latest packages available.
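In other words, keeping your install current is just the usual APT routine:

    sudo apt-get update
    sudo apt-get upgrade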
  14. Actually that's a good question. Even though it's not perfect, I would say that Softy is the right location for now. @Igor, what do you think?
  15. @SvenHz Actually, first install the package linux-headers-next-mvebu, then install zfs-dkms and zfsutils-linux (the full sequence is sketched below).
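Putting the posts above together, the overall sequence on a fresh Armbian Buster looks roughly like this (the 90_zfs pinning file is the one shown in an earlier post; expect and ignore the zfs-mount error described in my install log):

    # APT pinning so the ZFS packages come from buster-backports
    sudo nano /etc/apt/preferences.d/90_zfs

    sudo apt-get update
    sudo apt-get install linux-headers-next-mvebu   # headers first, so DKMS can build the modules
    sudo apt-get install zfs-dkms zfsutils-linux    # postinst may fail to start zfs-mount, ignore it

    sudo systemctl daemon-reload
    sudo reboot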