gprovost

Members
  • Posts

    580
  • Joined

  • Last visited

Reputation Activity

  1. Like
    gprovost reacted to aprayoga in 2.5G Ethernet crash (r8152)   
@FloBaoti you can get the 2.14 source from https://github.com/wget/realtek-r8152-linux and compile it using Docker, similar to this thread.
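    For reference, a rough sketch of building the module natively instead of in Docker (my guess at the steps, not from the thread; it assumes the repo ships a standard kbuild Makefile and that matching kernel headers are installed):
    git clone https://github.com/wget/realtek-r8152-linux
    cd realtek-r8152-linux
    # build against the running kernel's headers
    make -C /lib/modules/$(uname -r)/build M=$PWD modules
    sudo make -C /lib/modules/$(uname -r)/build M=$PWD modules_install
    sudo depmod -a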
  2. Like
    gprovost reacted to Demodude123 in 2.5Gbps port, or USB 1Gbps Startech unlink during transfers   
I think I had a double issue. I have a Realtek card from the same family in the desktop I am testing with. I think my desktop needs the https://github.com/igorpecovnik/realtek-r8152-linux patch.
     
For now, I've switched cables around so I can use the 2.5Gbps port at about 100 MB/s on our home LAN, and the 1Gbps port directly into my motherboard port. The 2.5G card in the desktop is a PCIe card.
     
    I'll keep it this way until the TX checksum offload issue is resolved. Thanks for your help.
  3. Like
    gprovost reacted to NotAnExpert in Missing RAID array after power loss   
    This is the output:
     
    mdadm --examine /dev/sd[abcde]
     
    /dev/sda:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
               Name : helios64:0  (local to host helios64)
      Creation Time : Thu Nov  5 22:00:27 2020
         Raid Level : raid6
       Raid Devices : 5
     
    Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
         Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=0 sectors
              State : clean
        Device UUID : ae15b2cf:d55f6fcf:1e4ef3c5:1824b2c7
     
    Internal Bitmap : 8 sectors from superblock
        Update Time : Thu Nov 26 07:13:29 2020
      Bad Block Log : 512 entries available at offset 56 sectors
           Checksum : 1e6baa82 - correct
             Events : 26352
     
             Layout : left-symmetric
         Chunk Size : 512K
     
       Device Role : Active device 0
       Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sdb:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
               Name : helios64:0  (local to host helios64)
      Creation Time : Thu Nov  5 22:00:27 2020
         Raid Level : raid6
       Raid Devices : 5
     
    Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
         Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=0 sectors
              State : clean
        Device UUID : e6c4d356:a1021369:2fcd3661:deaa4406
     
    Internal Bitmap : 8 sectors from superblock
        Update Time : Wed Nov 25 23:33:03 2020
      Bad Block Log : 512 entries available at offset 56 sectors
           Checksum : 1909966b - correct
             Events : 26341
     
             Layout : left-symmetric
         Chunk Size : 512K
     
       Device Role : Active device 1
       Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sdc:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
               Name : helios64:0  (local to host helios64)
      Creation Time : Thu Nov  5 22:00:27 2020
         Raid Level : raid6
       Raid Devices : 5
     
    Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
         Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=0 sectors
              State : clean
        Device UUID : 31f197dd:6f6d0545:fbc627a6:53582fd4
     
    Internal Bitmap : 8 sectors from superblock
        Update Time : Thu Nov 26 07:13:29 2020
      Bad Block Log : 512 entries available at offset 56 sectors
           Checksum : 8e9940b8 - correct
             Events : 26352
     
             Layout : left-symmetric
         Chunk Size : 512K
     
       Device Role : Active device 2
       Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sdd:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
               Name : helios64:0  (local to host helios64)
      Creation Time : Thu Nov  5 22:00:27 2020
         Raid Level : raid6
       Raid Devices : 5
     
    Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
         Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=0 sectors
              State : clean
        Device UUID : 3296b043:2025da1d:edc6e9eb:99e57e34
     
    Internal Bitmap : 8 sectors from superblock
        Update Time : Wed Nov 25 23:33:03 2020
      Bad Block Log : 512 entries available at offset 56 sectors
           Checksum : 739abeb1 - correct
             Events : 26341
     
             Layout : left-symmetric
         Chunk Size : 512K
     
       Device Role : Active device 3
       Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sde:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
               Name : helios64:0  (local to host helios64)
      Creation Time : Thu Nov  5 22:00:27 2020
         Raid Level : raid6
       Raid Devices : 5
     
    Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
         Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=0 sectors
              State : clean
        Device UUID : ecf702ec:8dec71e2:a7f465fa:86b78694
     
    Internal Bitmap : 8 sectors from superblock
        Update Time : Wed Nov 25 23:33:03 2020
      Bad Block Log : 512 entries available at offset 56 sectors
           Checksum : 4f08e782 - correct
             Events : 26341
     
             Layout : left-symmetric
         Chunk Size : 512K
     
       Device Role : Active device 4
       Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
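    (A side note, not part of the original post: a quick way to compare just the fields that matter here, the event counters and per-device array state, without scrolling through the full dump:)
    mdadm --examine /dev/sd[abcde] | grep -E '/dev/sd|Update Time|Events|Array State'
    In this dump, sda and sdc are at event count 26352 and see the array as 'AAA..', while sdb, sdd and sde stopped updating at 26341 and still show 'AAAAA'.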
     
    =======
     
    Regarding what I tried
     
When I noticed I couldn’t access the files and folders from my computer, I tried other computers. Then I went to the OMV front end and changed the workgroup, thinking it was just a Windows issue, and when that didn’t work I removed the shares from Samba; I did not delete them from Access Rights Management > Shared Folders.
     
At that point I realized the filesystem was missing and mounting was disabled. I did nothing else, either in the OMV web interface or through SSH commands, except for the info and commands you have requested.
     
    Thank you!
  4. Like
    gprovost reacted to djurny in Helios64 - freeze whatever the kernel is.   
    Hi,
    I've also experienced almost hourly instabilities when running some load on my Helios64 box. Tried several kernels, each with their own Oops/BUG pattern. See below for an overview:
     
     
It's not exhaustive; in the end I did the following and the box is now running some load (a snapraid scrub on ~12TiB of data) without any issue:
      • Enabled the daily built kernel, now running Linux kobol0 5.9.11-rockchip64 #trunk.2 SMP PREEMPT Sun Nov 29 00:29:16 CET 2020 aarch64 GNU/Linux.
        Why: Every kernel had its own pattern, either do_undefinstr, an XHCI hangup or a page fault. Assumed latest and greatest has the most fixes.
      • Enabled the i2c dtb overlays.
        Why: Some of the kernels showed an IRQ related to i2c in the Oops/BUG. Thought I'd find something in the dtb related to i2c and just enable it, to see if that might fix something.
      • Moved rootfs from USB stick to SATA SSD in slot 4.
        Why: Some of the kernels had a repeatedly hanging XHCI controller, so I tried to remove some USB devices from the controller, to see if the amount of load on the controller itself might be a vector (, Victor).
      • Also removed tlp and set SATA link power management to max_performance (hat tip @gprovost); see the sketch below.
     It's a weak investigation, as I fiddled with multiple things at once, trying to get things going quickly (I do not have as much spare time to spend on this as I would like). Still, perhaps this will trigger someone or give some more angles to fiddle with for others.
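    The SATA link power management change mentioned above can be done through sysfs; a minimal sketch (assuming the stock AHCI hosts; making it persist across reboots needs a udev rule or similar):
    for p in /sys/class/scsi_host/host*/link_power_management_policy; do
        echo max_performance | sudo tee "$p"
    done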
    Fingers crossed.
     
    Looking good so far:
djurny@kobol0:~$ uname -a
    Linux kobol0 5.9.11-rockchip64 #trunk.2 SMP PREEMPT Sun Nov 29 00:29:16 CET 2020 aarch64 GNU/Linux
    djurny@kobol0:~$ uptime
     07:26:58 up 2 days, 10:40,  7 users,  load average: 1.73, 1.76, 1.74
    djurny@kobol0:~$
    (The box has been running rdfind, xfs_fsr, snapraid scrub & check for the last 2 days, in that order.)
     
    Groetjes,
  5. Like
    gprovost reacted to Igor in ZFS on Helios64   
I added zfs-dkms_0.8.5-2~20.04.york0_all to our repository, so at least the Ubuntu version works OOB (apt install zfs-dkms). I tested the installation on an Odroid N2+ running 5.9.11.

* Packaging mirrors need some time to receive updates, so you might get some file-not-found errors, but it will work tomorrow. Headers must be installed from armbian-config.
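    A rough sketch of that install sequence, assuming an Ubuntu-based Armbian image (any package beyond zfs-dkms is my assumption, not from the post):
    sudo armbian-config          # Software -> Headers, to install matching kernel headers
    sudo apt update
    sudo apt install zfs-dkms zfsutils-linux
    sudo modprobe zfs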
     
     
  6. Like
    gprovost reacted to tjay in SSD Unexpected_Power_Loss / POR_Recovery_Count on Reboot   
Inspired by this topic https://forum.odroid.com/viewtopic.php?t=29069 I installed a shutdown script in /lib/systemd/system-shutdown:
    #!/bin/bash
    case "$1" in
        kexec)
            # Do not park disks when switching kernels.
            ;;
        *)
            for disk in /sys/block/sd* ; do
                echo 1 > /sys/class/block/${disk##*/}/device/delete
            done
            sleep 1
            ;;
    esac
    This works for my needs.
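    A hedged note on deploying it (the filename is my choice; systemd runs every executable in that directory at shutdown and passes the action, e.g. "poweroff", "reboot" or "kexec", as $1):
    sudo install -m 0755 park-disks.sh /lib/systemd/system-shutdown/park-disks.sh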
     
I don’t know if this problem is Helios64 specific; if not, please move this topic to Armbian common issues.
  7. Like
    gprovost reacted to ShadowDance in How-to start fans during early boot   
If you want the fans of your Helios64 to start spinning earlier, look no further. This will allow the fans to spin at a constant speed from the earliest stage of initrd until fancontrol is started by the system. My use case is full disk encryption and recovery in initramfs; sometimes I can have the machine powered on for quite a while before fancontrol starts. Also, if the boot process encounters an error the fans may never start at all; this prevents that.
     
    Step 1: Tell initramfs to include the pwm-fan module
    echo pwm-fan | sudo tee -a /etc/initramfs-tools/modules  
    Step 2: Create a new file at /etc/initramfs-tools/scripts/init-top/fan with the following contents:
#!/bin/sh
    PREREQ=""
    prereqs() {
        echo "$PREREQ"
    }
    case $1 in
        prereqs)
            prereqs
            exit 0
            ;;
    esac

    . /scripts/functions

    modprobe pwm-fan
    for pwm in /sys/devices/platform/p*-fan/hwmon/hwmon*/pwm1; do
        echo 150 >$pwm
    done
    exit 0
    Feel free to change the value 150 to anywhere between 1-255; it decides how fast the fans will spin.
     
    Step 3: Enable the fan script for inclusion in initramfs and update the initramfs image:
sudo chmod +x /etc/initramfs-tools/scripts/init-top/fan
    sudo update-initramfs -u -k all
Step 4: Reboot and enjoy the cool breeze.
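    An optional sanity check (my addition, assuming the standard Armbian initrd path) that the module and the script actually made it into the new initramfs:
    lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'pwm-fan|init-top/fan'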
  8. Like
    gprovost reacted to ebin-dev in HomeAutomation on Helios64 / Armbian   
    Some of you may be interested to not only store your data on Helios64 but to also run other services on it - such as home automation. One of the popular platforms for home automation in Europe - and in Germany in particular - is "Homematic". They opened up their ecosystem some time ago and allow others to implement their own components (some excellent examples from Jérôme are here). 
     
As of yesterday - with the latest commit - it is possible to run a virtualised "ccu3" base station for that platform, called pivccu, with support for a detached antenna module (RPI-RF-MOD) coupled to the Helios64 (on Armbian 20.11, kernel 5.9.10) over a TCP/IP network, using a dedicated circuit board developed by Alex Reinert (HB-RF-ETH). All of these projects are in the open domain. In fact, not only the Helios64 but most SBCs running Armbian and a recent mainline kernel should now be supported.
     
There is a logic layer built into the pivccu system, but actions can also easily be automated using e.g. Node-RED or openHAB 2.
     
Just to let you know: if you don't want to build the hardware yourself, components can be purchased in Germany from e.g. ELV and smartkram.
     
  9. Like
    gprovost reacted to Igor in Helios64 - freeze whatever the kernel is.   
As @grek already pointed out, use / try the most recent version. I think that kernel freeze was just a temporary workaround and it's not valid anymore.
  10. Like
    gprovost reacted to grek in Helios64 - freeze whatever the kernel is.   
Have you tried jumping to Armbian 20.11 and kernel 5.9.10?
    Currently I'm using 5.9.10 since Friday without any issues. Before that 5.9.9, and I successfully copied more than 2 TB of data (ZFS - RAID1).
    I also ran Docker and compiled ZFS from sources a few times. The load was high but I don't see any issues.
     
  11. Like
    gprovost reacted to m11k in HOWTO: btrfs root filesystem   
    I gave this method a try, and it worked successfully without any of the btrfs "no space left on device" errors.  The process was largely the same, except instead of the eMMC device being /dev/sdb, it was /dev/mmcblk1.  I also mounted the storage array and used that to hold the tarball of the old root filesystem.
  12. Like
    gprovost got a reaction from 0x349850341010010101010100 in ZFS on Helios64   
Yeah, definitely the topic deserved a dedicated thread, even a page on our wiki if someone feels like contributing: https://github.com/kobol-io/wiki
     
    IMHO zfs-dkms would be the easiest approach for most people.
  13. Like
    gprovost got a reaction from TheLinuxBug in Random system freezes   
This is a PSU for only a 2-bay setup (I think only rated 5A?). It might not be sufficient for 4 x 3.5" HDDs.
     
     
@TheLinuxBug The driver you are referring to is mv_xor (mv_cesa is for the hw crypto engine). mv_xor is already built into Armbian for the mvebu family.
     
A way to confirm that Helios4 offloads XOR operations to the hw engine is to check /proc/interrupts and look at the following 2 lines:
     
    47:      43007          0     GIC-0  54 Level     f1060800.xor
    48:      40208          0     GIC-0  97 Level     f1060900.xor
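    (A small aside: the same check condensed into one command; the counters should keep climbing while the array is doing parity work:)
    watch -n1 'grep xor /proc/interrupts'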
You're right about NFS; it has often been the culprit in bringing a system to its knees when not properly tuned.
     
     
@karamike Your log looks more like an overloaded system. What protocol are you using to access data over the network?
     
    @Mangix Hope you manage to narrow down the root cause
  14. Like
    gprovost reacted to grek in ZFS on Helios64   
    Hey,
I think a dedicated topic for ZFS on Helios64 is needed. Many people want to have it.
     
    As we know, and many of us are playing with dev builds for Helios, it is not easy to work with ZFS.
    I wrote a few scripts; maybe they can help someone in some way.
    Thanks @jbergler and @ShadowDance, I used your ideas to complete it.
    The problem comes when playing with OMV and the ZFS plugin: the dependencies then remove our latest version of ZFS. I tested it; everything survived, but ZFS gets downgraded to the latest version from the repo.
    for example:
     
    root@helios64:~# zfs --version
    zfs-0.8.4-2~bpo10+1
    zfs-kmod-2.0.0-rc6
    root@helios64:~# uname -a
    Linux helios64 5.9.11-rockchip64 #trunk.1 SMP PREEMPT Thu Nov 26 01:32:45 CET 2020 aarch64 GNU/Linux
    root@helios64:~#
     
I tested it with kernel 5.9.10 and with today's clean install with 5.9.11.
     
First we need to have Docker installed (armbian-config -> Software -> Softy -> Docker).
     
Create a dedicated directory with a Dockerfile (we will customize the ubuntu:bionic image with the required libraries and gcc 10; in later builds we can skip this step and just customize and run the build-zfs.sh script).
    mkdir zfs-builder
    cd zfs-builder
    vi Dockerfile
     
FROM ubuntu:bionic
    RUN apt update; \
        apt install build-essential autoconf automake bison flex libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev python3 python3-dev python3-setuptools python3-cffi libffi-dev -y; \
        apt install software-properties-common -y; \
        add-apt-repository ppa:ubuntu-toolchain-r/test; \
        apt install gcc-10 g++-10 -y; \
        update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10; \
        update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10
    Build docker image for building purposes.
     
    docker build --tag zfs-build-ubuntu-bionic:0.1 .  
    Create script for ZFS build
     
    vi build-zfs.sh
     
#!/bin/bash
    # define zfs version
    zfsver="zfs-2.0.0-rc6"

    # create build directory
    mkdir /tmp/zfs-builds && cd "$_"
    rm -rf /tmp/zfs-builds/inside_zfs.sh
    apt-get download linux-headers-current-rockchip64
    git clone -b $zfsver https://github.com/openzfs/zfs.git $zfsver-$(uname -r)

    # create file to execute inside container
    echo "creating file to execute it inside container"
    cat > /tmp/zfs-builds/inside_zfs.sh <<EOF
    #!/bin/bash
    cd scratch/
    dpkg -i linux-headers-current-*.deb
    zfsver="zfs-2.0.0-rc6"
    cd "/scratch/$zfsver-$(uname -r)"
    sh autogen.sh
    ./configure
    make -s -j$(nproc)
    make deb
    mkdir "/scratch/deb-$zfsver-$(uname -r)"
    cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
    rm -rf "/scratch/$zfsver-$(uname -r)"
    exit
    EOF
    chmod +x /tmp/zfs-builds/inside_zfs.sh

    echo ""
    echo "####################"
    echo "starting container.."
    echo "####################"
    echo ""
    docker run --rm -it -v /tmp/zfs-builds:/scratch zfs-build-ubuntu-bionic:0.1 /bin/bash /scratch/inside_zfs.sh

    # Cleanup packages (if installed).
    modprobe -r zfs zunicode zzstd zlua zcommon znvpair zavl icp spl
    apt remove --yes zfsutils-linux zfs-zed zfs-initramfs
    apt autoremove --yes

    dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/kmod-zfs-$(uname -r)*.deb
    dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/{libnvpair1,libuutil1,libzfs2,libzpool2,python3-pyzfs,zfs}_*.deb

    echo ""
    echo "###################"
    echo "building complete"
    echo "###################"
    echo ""
chmod +x build-zfs.sh
    screen -L -Logfile buildlog.txt ./build-zfs.sh
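    A hedged sanity check once the build and install have finished (the expected version string follows from the zfsver set in the script above):
    sudo modprobe zfs
    zfs --version    # should now report zfs-2.0.0-rc6 for both userland and kmod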
     
     
     
     
     
  15. Like
    gprovost reacted to tmb in First Start – No Console Output   
    Hi again,
     
Yes, the lights were solid blue after boot (power and HDDs, nothing else), so the comment about not getting a clean boot makes sense.
     
I was monitoring the router but no IP was coming up for it, so this again makes sense, as it wasn’t really booting from the SD card.
     
Here’s what actually worked last evening (I don’t remember the exact order):
      • Changed the SD card image from Armbian_20.08.13_Helios4_buster_current_5.8.16.img to Armbian_20.11_Helios64_buster_current_5.9.10.img (yes, using Etcher); initially I was using the older image for what seemed to be stability reasons…
      • Before burning the image again, reformatted the card from exFAT to FAT in Windows 10 (as mentioned, not really much experience with Linux, so this might have been the cause)
      • Moved the device from the left side of my desk to the right
     
It powered up and connected via USB as COM6 with my desktop, which I was checking anyway every time before trying (for example, the desktop was always COM6, the laptop always COM3).
     
    Looks fine now, power light on, system light blinking, no hdd lights (as I don’t have any yet).
     
I even installed OMV, which took about 20 mins, and I am able to access it.
     
    Now off to buying disks, fun times ahead
    Thanks for your time, people!
  16. Like
    gprovost reacted to m11k in HOWTO: btrfs root filesystem   
    Sure, I'd be happy to contribute this.  Rewriting it in markdown would be a pleasure, as I found all the clicking required in the forum a little frustrating for a long post.  When I get a few hours free, I'd like to try this process again using the helios itself booted to armbian on sd-card to do the conversion.  That way a second linux system wouldn't be required, and it might fix the random ENOSPC errors I encountered.
  17. Like
    gprovost reacted to Heisath in Random system reboots   
    Just in case you misunderstood: I do not want to send any PR to OpenWRT.
     
    I am only asking you (as you have a reliable way of crashing) to test once without any DFS patches at all. And once with the OpenWRT DFS patches, which I added to AR-526.
  18. Like
    gprovost reacted to m11k in HOWTO: btrfs root filesystem   
    Hi,
     
I wanted to convert my root filesystem to btrfs.  This allows me to use snapper to take regular snapshots.  I also use btrbk to send my snapshots to my secondary storage (also a btrfs array).  My root filesystem is installed to eMMC, but there isn't any reason you couldn't do this with an SD card.  These instructions are done with a laptop running Linux.  I'm using Debian in this case, although any distro with a recent kernel and btrfs tools installed should work.
     
    CAVEATS:
The current u-boot image (2020.08.21) doesn't support booting natively from btrfs.  Upstream u-boot *does* support btrfs, though, so hopefully a future build of u-boot will add it.  To work around this, we have to create an ext4 /boot partition, and then a btrfs root partition.
     
    1. To start, I booted the helios64 via sd card to the USB mass storage image (https://wiki.kobol.io/helios64/install/emmc/).  I then connected a laptop to the helios64 via the USB-C cable, and the emmc appeared as /dev/sdb.  The rest of these instructions will assume the eMMC is /dev/sdb, so please adjust accordingly for your system.
     
    2. Mount the eMMC root filesystem on the host:
# mkdir /mnt/nasrootext4
    # mount /dev/sdb1 -o ro /mnt/nasrootext4
    3. Back up the contents of the root filesystem:
    # tar -C /mnt/nasrootext4 --acls --xattrs -cf root_backup.tar .  
    4. Make a backup of the partition layout (in case you decide to revert this modification):
    # fdisk -l /dev/sdb | tee nas_fdisk.txt  
    5. Unmount the filesystem, and run fdisk on it.  Make note of the start sector.  It should be 32768.
# umount /dev/sdb1
    # fdisk /dev/sdb

    Command (m for help): p
    Disk /dev/sdb: 14.6 GiB, 15634268160 bytes, 30535680 sectors
    Disk model: UMS disk 0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x794023b5

    Device     Boot Start      End  Sectors  Size Id Type
    /dev/sdb1       32768 30230303 30197536 14.4G 83 Linux
    6. Delete the existing partition ('d')
    7. Create a new partition ('n')
    8. Select 'primary' partition type
    9. Select partition number 1
    10. Select first sector: 32768 (This is very important to get right, and should match the start sector of the partition that was just deleted)
11. Select last sector: +512M (this creates a 512MB /boot partition)
12. You will see the message "Partition #1 contains a ext4 signature."  This is okay.  I selected 'n' (don't remove the signature), but it doesn't really matter; you may as well remove it, since we're going to create a new ext4 filesystem anyway.
    13. Create new partition ('n')
    14. Select 'primary' partition type
    15. Select partition number 2
    16. First sector: 1081344 (this is one more than the end sector of the other partition)
    17. Last sector: (leave the default as the full drive, 30535679 in my case)
    18. Print the layout to confirm:
Disk /dev/sdb: 14.6 GiB, 15634268160 bytes, 30535680 sectors
    Disk model: UMS disk 0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x794023b5

    Device     Boot   Start      End  Sectors  Size Id Type
    /dev/sdb1         32768  1081343  1048576  512M 83 Linux
    /dev/sdb2       1081344 30535679 29454336   14G 83 Linux
    19. Write the partition table to disk and exit ('w')
    20. Create a new ext4 filesystem on the first partition: 'mkfs.ext4 /dev/sdb1' (you may need to add -F if it complains about an existing ext4 signature)
    21. Create a new btrfs filesystem on the second partition: 'mkfs.btrfs /dev/sdb2'
    22. Mount the new root filesystem:
# mkdir /mnt/nasrootbtrfs
    # mount -o compress=zstd /dev/sdb2 /mnt/nasrootbtrfs
    23. Make some initial subvolumes (note that '@' will be the root subvolume, as this is preferred by 'snapper'):
# cd /mnt/nasrootbtrfs
    # btrfs subvol create @
    # btrfs subvol create @home
    # btrfs subvol create @var_log
    24. Mount the boot partition, and the @home and @var_log subvolumes
# cd @
    # mkdir boot
    # mount /dev/sdb1 boot/
    # mkdir home
    # mount /dev/sdb2 -o noatime,compress=zstd,subvol=/@home home
    # mkdir -p var/log
    # mount /dev/sdb2 -o noatime,compress=zstd,subvol=/@var_log var/log
    # mount | grep sdb
    /dev/sdb2 on /mnt/nasrootbtrfs type btrfs (rw,relatime,compress=zstd,space_cache,subvolid=5,subvol=/)
    /dev/sdb1 on /mnt/nasrootbtrfs/@/boot type ext4 (rw,relatime)
    /dev/sdb2 on /mnt/nasrootbtrfs/@/home type btrfs (rw,noatime,compress=zstd,space_cache,subvolid=258,subvol=/@home)
    /dev/sdb2 on /mnt/nasrootbtrfs/@/var/log type btrfs (rw,noatime,compress=zstd,space_cache,subvolid=260,subvol=/@var_log)
    25. Note that our current directory is '/mnt/nasrootbtrfs/@'
    26. Untar the old root filesystem into the new root directory.  Since boot, home, and var/log are mounted, tar will extract the relevant data into these mounted filesystems.
# tar -xvf /root/helios64_root.tar --acls --xattrs --numeric-owner | tee /root/extract_log.txt
    This is the only spot where things did not go as expected.  I received a handful of errors about "No space left on device" while untarring.  These seem to be transient btrfs errors.  I use btrfs pretty extensively at work, and I've seen cases in the past on older kernels where btrfs would falsely report ENOSPC when the filesystem was under heavy load.  I'm not sure if that is what is happening here, but I would guess it's a bug.  It could be a bug in the kernel on my laptop, or it could be a bug in the USB-C mass storage export on the helios64.  To recover from this, I ran the tar command a couple of times.  I continued to receive these errors on different files.  I did this three or four times.  I also tried removing --acls and --xattrs, but it didn't seem to make much of a difference.  On my last attempt to untar, all of the files that failed happened to be from usr/lib/python3, so I untarred just that directory: 'tar -xvf /root/helios64_root.tar --numeric-owner ./usr/lib/python3'.  That extracted without error.  I was satisfied that everything made it back on disk.  Perhaps if you only extracted one top-level directory at a time, it would not produce any errors.  Maybe there is some way to slow down the 'tar' to avoid these ENOSPC errors.
     
    27. After extracting the root filesystem, I made a snapshot of it: 'btrfs subvol snap -r . root_post_copy'
    28. Update /mnt/nasrootbtrfs/@/etc/fstab.  This was my old fstab:
Old version:
    UUID=dabb3dbf-8631-4051-a032-c0b97eb285bd / ext4 defaults,noatime,nodiratime,commit=600,errors=remount-ro 0 1
    tmpfs /tmp tmpfs defaults,nosuid 0 0
    # >>> [openmediavault]
    /dev/disk/by-label/helios64_btrfs_raid1 /srv/dev-disk-by-label-helios64_btrfs_raid1 btrfs defaults,nofail,noatime,nodiratime,compress=zstd 0 2
    # <<< [openmediavault]
Look up the new UUID for /dev/sdb2.  I did this by running 'ls -l /dev/disk/by-uuid'.  Updated /etc/fstab:
UUID=240bab7d-1b6c-48f4-898d-ba12abcecb3f / btrfs defaults,noatime,compress=zstd,ssd,subvol=@ 0 0
    UUID=240bab7d-1b6c-48f4-898d-ba12abcecb3f /home btrfs defaults,noatime,compress=zstd,ssd,subvol=@home 0 0
    UUID=240bab7d-1b6c-48f4-898d-ba12abcecb3f /var/log/ btrfs defaults,noatime,compress=zstd,ssd,subvol=@var_log 0 0
    UUID=fd5b620c-57e2-4031-b56c-3c64ce7c7d5f /boot ext4 defaults,noatime 0 2
    tmpfs /tmp tmpfs defaults,nosuid 0 0
    # >>> [openmediavault]
    /dev/disk/by-label/helios64_btrfs_raid1 /srv/dev-disk-by-label-helios64_btrfs_raid1 btrfs defaults,nofail,noatime,nodiratime,compress=zstd 0 2
    # <<< [openmediavault]
29. Modify /mnt/nasrootbtrfs/@/boot/armbianEnv.txt.  In particular, modify rootdev and rootfstype, and add 'extraargs'.  Old version:
verbosity=1
    bootlogo=false
    overlay_prefix=rockchip
    rootdev=UUID=dabb3dbf-8631-4051-a032-c0b97eb285bd
    rootfstype=ext4
    usbstoragequirks=0x2537:0x1066:u,0x2537:0x1068:u
    New version:
verbosity=1
    bootlogo=false
    overlay_prefix=rockchip
    rootdev=UUID=240bab7d-1b6c-48f4-898d-ba12abcecb3f
    rootfstype=btrfs
    usbstoragequirks=0x2537:0x1066:u,0x2537:0x1068:u
    extraargs=rootflags=subvol=@
30. Make a 'boot' symlink.  I'm not sure if this is necessary, but u-boot expects the boot files to be in the '/boot' directory on the root filesystem, and they are now in the root directory of the boot partition.  This symlink makes sure that if u-boot tries to access /boot/foo, it will actually access /foo.
# cd /mnt/nasrootbtrfs/@/boot
    # ln -s . boot
    31.  Unmount everything:
# umount /mnt/nasrootbtrfs/@/var/log
    # umount /mnt/nasrootbtrfs/@/home
    # umount /mnt/nasrootbtrfs/@/boot
    # umount /mnt/nasrootbtrfs/
    32. Power off helios64
    33. Remove sd card
    34. Start picocom on your laptop ('picocom -b 1500000 /dev/ttyUSB0')
35. Turn on the helios64 and watch the startup output.
     
    TROUBLESHOOTING:
Of course this did not work on the first attempt for me.  After rebooting the helios64, it never successfully booted into Linux.  I didn't see any errors from u-boot.  To troubleshoot, I wrote the original helios64 Debian buster image to an SD card and booted the helios64 from that.  I mounted /dev/mmcblk1p1 to /mnt, and changed 'verbosity' to 7 in armbianEnv.txt.  I eventually discovered that I had the UUID wrong in /boot/armbianEnv.txt.  Correcting that value fixed the issue, and the helios64 booted successfully from eMMC.
     
    SETTING UP AUTOMATED SNAPSHOTS AND BACKUPS:
    Enabling 'snapper' for automated root snapshots was very easy:
    1. sudo apt install snapper
    2. sudo snapper -c root create-config /
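    Once the config exists, manual snapshots and listing work as usual; for example (standard snapper commands, not specific to this post):
    sudo snapper -c root create --description "before tinkering"
    sudo snapper -c root list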
     
The Debian snapper package automatically takes snapshots before and after apt upgrades.  However, since /boot is a separate filesystem, it is not included in these snapshots.  As a workaround, I added an apt hook which rsyncs /boot to /.bootbackup before the snapshots are made.  Note that the 'snapper' hook is /etc/apt/apt.conf.d/80snapper, so I created /etc/apt/apt.conf.d/79-snapper-boot-backup:
# /etc/apt/apt.conf.d/79-snapper-boot-backup
    DPkg::Pre-Invoke { "if mountpoint -q /boot && [ -e /.bootbackup ] ; then rsync -a --delete /boot /.bootbackup || true ; fi"; };
    DPkg::Post-Invoke { "if mountpoint -q /boot && [ -e /.bootbackup ] ; then rsync -a --delete /boot /.bootbackup || true ; fi"; };
    This takes care of automated snapshots.  I still want snapshots stored on another drive (or another system) for redundancy.  I'm using 'btrbk' to do this.  btrbk wants the btrfs top-level subvolume mounted somewhere.  To do so, I created the directory /mnt/btr_pool, and added the following line to /etc/fstab:
    UUID=240bab7d-1b6c-48f4-898d-ba12abcecb3f /mnt/btr_pool/ btrfs defaults,noatime,compress=zstd,ssd,subvolid=5 0 0  
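    Then create the mountpoint and mount it via the new fstab entry (my sketch of the obvious follow-up, not spelled out in the post):
    sudo mkdir -p /mnt/btr_pool
    sudo mount /mnt/btr_pool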
    My spinning drive array on the helios64 is a btrfs filesystem, and I have it mounted on /srv/dev-disk-by-label-helios64_btrfs_raid1.  I created a '/srv/.../backups/helios64/btrfs/volumes' directory.  I then created the following three files:
     
    /srv/dev-disk-by-label-helios64_btrfs_raid1/backups/helios64/btrfs/btrbk.conf:
timestamp_format        long
    volume /mnt/btr_pool
      snapshot_dir          btrbk
      snapshot_preserve     30d
      snapshot_preserve_min latest
      target_preserve       7d 8w 6m
      target_preserve_min   latest
      target send-receive   /srv/dev-disk-by-label-helios64_btrfs_raid1/backups/helios64/btrfs/volumes
      subvolume @
      subvolume @home
      subvolume @var_log
    /srv/dev-disk-by-label-helios64_btrfs_raid1/backups/helios64/btrfs/rsync_boot.sh:
#!/bin/sh
    set -e -x
    mountpoint /boot
    [ -e /.bootbackup ]
    rsync -a --delete /boot /.bootbackup
    /srv/dev-disk-by-label-helios64_btrfs_raid1/backups/helios64/btrfs/run.sh
#!/bin/bash
    set -e -x
    CDIR=$( dirname ${BASH_SOURCE[0]} )
    ${CDIR}/rsync_boot.sh
    btrbk -c ${CDIR}/btrbk.conf run
    I then added a daily cron job which runs the 'run.sh' script.
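    For example, a cron.d-style entry such as the following would do (the schedule and file location are my assumption, not from the post):
    # /etc/cron.d/btrbk-helios64
    30 3 * * * root /srv/dev-disk-by-label-helios64_btrfs_raid1/backups/helios64/btrfs/run.sh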
     
    Well, that was a longer post than I had imagined.  I hope someone finds it helpful.
  19. Like
    gprovost got a reaction from Zageron in Network Performance on Armbian 20.08.21 Buster / Linux 5.8.17-rockchip64   
    Ok interesting.
     
On my side I don't see any iperf difference when increasing the MTU, besides seeing more retries.
     
Here are my numbers with the default MTU of 1500.
     
    Helios64 Transmitting (TX checksum offload disabled)
-----------------------------------------------------------
    Server listening on 5201
    -----------------------------------------------------------
    Accepted connection from 10.10.10.254, port 59252
    [  5] local 10.10.10.2 port 5201 connected to 10.10.10.254 port 59254
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.01   sec   177 MBytes  1.48 Gbits/sec    0   4.29 MBytes
    [  5]   1.01-2.01   sec   224 MBytes  1.88 Gbits/sec    0   4.29 MBytes
    [  5]   2.01-3.01   sec   224 MBytes  1.88 Gbits/sec    0   4.29 MBytes
    [  5]   3.01-4.01   sec   221 MBytes  1.85 Gbits/sec    0   4.29 MBytes
    [  5]   4.01-5.01   sec   220 MBytes  1.85 Gbits/sec    0   4.29 MBytes
    [  5]   5.01-6.01   sec   218 MBytes  1.82 Gbits/sec    0   4.29 MBytes
    [  5]   6.01-7.01   sec   201 MBytes  1.69 Gbits/sec    0   4.29 MBytes
    [  5]   7.01-8.01   sec   222 MBytes  1.87 Gbits/sec    0   4.29 MBytes
    [  5]   8.01-9.01   sec   216 MBytes  1.81 Gbits/sec    0   4.29 MBytes
    [  5]   9.01-10.00  sec   220 MBytes  1.86 Gbits/sec    0   4.29 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec  2.09 GBytes  1.80 Gbits/sec    0             sender
    Helios64 Receiving
-----------------------------------------------------------
    Server listening on 5201
    -----------------------------------------------------------
    Accepted connection from 10.10.10.254, port 59256
    [  5] local 10.10.10.2 port 5201 connected to 10.10.10.254 port 59258
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-1.00   sec   261 MBytes  2.19 Gbits/sec
    [  5]   1.00-2.00   sec   281 MBytes  2.35 Gbits/sec
    [  5]   2.00-3.00   sec   281 MBytes  2.35 Gbits/sec
    [  5]   3.00-4.00   sec   280 MBytes  2.35 Gbits/sec
    [  5]   4.00-5.00   sec   280 MBytes  2.35 Gbits/sec
    [  5]   5.00-6.00   sec   281 MBytes  2.35 Gbits/sec
    [  5]   6.00-7.00   sec   281 MBytes  2.35 Gbits/sec
    [  5]   7.00-8.00   sec   281 MBytes  2.35 Gbits/sec
    [  5]   8.00-9.00   sec   281 MBytes  2.35 Gbits/sec
    [  5]   9.00-10.00  sec   280 MBytes  2.35 Gbits/sec
    [  5]  10.00-10.00  sec   601 KBytes  2.37 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-10.00  sec  2.72 GBytes  2.34 Gbits/sec                  receiver
Just for the sake of clarification, because iperf can be confusing: by default the iperf client tests upload speed.
     
    iperf -c <helios64-ip>  = test RX speed of Helios64
     
iperf -c <helios64-ip> -R = test TX speed of Helios64
     
  20. Like
    gprovost reacted to aprayoga in USB-C DAS (Is it or is it NOT supported?)   
@slymanjojo To use DAS on LK 5.9, you need to configure the USB-C port in device mode.
     
In your Helios64, create a file dwc3-0-device.dts containing the following code:
    /dts-v1/;
    /plugin/;

    / {
        compatible = "rockchip,rk3399";

        fragment@0 {
            target = <&usbdrd_dwc3_0>;
            __overlay__ {
                dr_mode = "peripheral";
            };
        };
    };
    Add and enable it as a user overlay:
    sudo armbian-add-overlay dwc3-0-device.dts
    and reboot.
     
Now you can follow the instructions on our wiki to enable DAS. If you are using Windows, you might need to unplug and replug the USB cable after enabling DAS mode.
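    (A quick way to double-check that the overlay is actually registered after the reboot, based on the armbianEnv.txt line mentioned below:)
    grep user_overlays /boot/armbianEnv.txt
    # expected: user_overlays=dwc3-0-device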
    ---
If you want to use the USB-C port in host mode, you need to remove the following line from /boot/armbianEnv.txt:
    user_overlays=dwc3-0-device
    and enable the host mode overlay in armbian-config > System > Hardware by ticking dwc3-0-host.
  21. Like
    gprovost got a reaction from Zageron in Network Performance on Armbian 20.08.21 Buster / Linux 5.8.17-rockchip64   
    @Zageron Yeah please formulate a clear question.
     
To check whether TX offloading is on or off:
     
    ethtool -k eth1 | grep tx-checksum  
Currently, TX checksum offload for all network interfaces on the rockchip64 family is disabled in Armbian by default.
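    If you want to flip it yourself for testing, ethtool can toggle it at runtime (not persistent across reboots):
    sudo ethtool -K eth1 tx off   # disable TX checksum offload
    sudo ethtool -K eth1 tx on    # re-enable it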
  22. Like
    gprovost reacted to Igor in How to run openmediavault and citadel-server on Helios64?   
I am almost certain you would run into the same problem on an x86 machine. Armbian is 100% compatible at the userland level, and OMV is a bulky suite which dramatically changes the OS ... you can see on the OMV forums how to deal with those problems. IMO, to be on the safe side, you will need to use the power of Docker. Armbian has fully supported it for years.
  23. Like
    gprovost got a reaction from jmanes in wakeonlan support   
For now the WoL feature is not yet supported on Helios64. It's on our TODO list. Once ready, we will announce it and add a page on our wiki to explain how to use it.
     
    We cannot provide an ETA because it's linked to an issue (failure to resume from suspend mode) that we haven't resolved yet.
  24. Like
    gprovost got a reaction from TRS-80 in Slow write speeds. Helios64. (tested SATA and USB UAS DAS)   
I didn't read your first message properly and missed a very important piece of information: your disk is formatted as NTFS :-/  You know that there can be a big write-performance penalty when using an NTFS partition on Linux, since it is handled in userspace via libfuse.
     
Can you do a local write test with dd to the NTFS partition to see the performance?
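    Something along these lines would do (the mount path and size are placeholders, not from the thread):
    dd if=/dev/zero of=/path/to/ntfs-mount/testfile bs=1M count=2048 conv=fdatasync status=progress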
     
Honestly, you should migrate to another filesystem than NTFS.
     
I did some tests again and I max out gigabit when writing from PC --> Helios64 over SMB or FTP.
  25. Like
    gprovost reacted to retrack in Very noisy fans   
Then yes, the default fancontrol config was super conservative. I have been running your suggestion for a few days and I am now graphing the temperature of the SoC and the disks to get better insights.