akschu

Reputation Activity

  1. Like
    akschu got a reaction from clostro in Stability issues. I get crashes compiling ZFS when clock is > 408mhz   
    I'm working on a new distro which is an ARM port of slackware64-15.0 beta.  The cooked image is here: http://mirrors.aptalaska.net/slackware/slarm64/images/helios64/
     
    This image is built using the build scripts at https://gitlab.com/sndwvs/images_build_kit/-/tree/arm/ 
     
    The high-level view is that it downloads the kernel from git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git (linux-5.12.y), applies the patches found at https://gitlab.com/sndwvs/images_build_kit/-/tree/arm/patch/kernel/rk3399-next, then builds the kernel with the config at https://gitlab.com/sndwvs/images_build_kit/-/blob/arm/config/kernel/linux-rk3399-next.config 
     
    This is very similar to Armbian, and the system seems to work just fine at first, but I cannot build ZFS without segfaults or other crashes.  I figured it was overheating or something, so I added
    echo 255 > /sys/class/hwmon/hwmon3/pwm1
    echo 255 > /sys/class/hwmon/hwmon4/pwm1
    which turns the fans on full.  The system gets very loud and stays nowhere near the 100C critical threshold, but when one of the fast cores (4-5) runs at 100% for more than a minute, the compiler crashes (no reboots).
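    To rule out thermal throttling as the cause, it can help to log the SoC temperature while the build runs. A minimal sketch (the thermal zone layout varies per board, so this just walks every zone; check each zone's `type` file to see which sensor is which):

```shell
#!/bin/sh
# Sample every thermal zone a few times while a build is running.
# Values in "temp" are millidegrees C, e.g. 55000 -> 55.0 C.
SAMPLES=${SAMPLES:-3}
i=0
while [ "$i" -lt "$SAMPLES" ]; do
    for tz in /sys/class/thermal/thermal_zone*/; do
        [ -r "${tz}temp" ] || continue
        printf '%s: %s mC\n' "$(cat "${tz}type")" "$(cat "${tz}temp")"
    done
    i=$((i + 1))
    sleep 1
done
```

Run it in a second shell during the compile; if the readings stay far below the trip point when the compiler dies, the problem is more likely voltage/frequency than heat.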
     
    I kept lowering the CPU frequency until I could get through a build; the fastest I can set the CPU and stay stable is 408 MHz:
    cpufreq-set --cpu 0 --freq 408000
    cpufreq-set --cpu 4 --freq 408000
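    Bisecting the highest stable speed can be scripted instead of guessed at. A rough sketch, stepping down through the rk3399 OPP table (the frequency list here is an assumption; read /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies for the real one; DRY_RUN=1, the default, only prints the commands):

```shell
#!/bin/sh
# Walk down the OPP frequencies to find the highest one that survives
# a full ZFS build. cpu0 covers the little cluster, cpu4 the big one.
try_freq() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "cpufreq-set --cpu 0 --freq $1"
        echo "cpufreq-set --cpu 4 --freq $1"
    else
        cpufreq-set --cpu 0 --freq "$1"   # little cluster (cpu0-3)
        cpufreq-set --cpu 4 --freq "$1"   # big cluster (cpu4-5)
    fi
}

for f in 1416000 1200000 1008000 816000 600000 408000; do
    try_freq "$f"
    # ...kick off the ZFS build here and stop stepping down once it survives...
done
```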
     
    I also tried changing the voltage regulator per another post I found:
    regulator dev vdd_log
    regulator value 930000
    regulator dev vdd_center
    regulator value 950000
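    Those regulator commands are entered at the U-Boot console; once Linux is up, the standard regulator sysfs class can confirm what the rails actually settled at. A sketch (the rail names are taken from the commands above; the `name`/`microvolts` attributes are the regulator class ABI, but index numbers vary per board):

```shell
#!/bin/sh
# Read back vdd_log / vdd_center from the Linux regulator class
# to check that the U-Boot settings stuck.
for r in /sys/class/regulator/regulator.*/; do
    [ -r "${r}name" ] || continue
    name=$(cat "${r}name")
    case "$name" in
        vdd_log|vdd_center)
            echo "$name: $(cat "${r}microvolts" 2>/dev/null) uV"
            ;;
    esac
done
: # succeed even when no regulator matches (e.g. testing off-target)
```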
     
    But that didn't help. 
     
    Any other ideas on how to make this box stable at more than 408 MHz?  It would be great to have Slackware on it and be able to do things with it.
     
    Thanks,
    Matt
  2. Like
    akschu got a reaction from TRS-80 in Stability issues on 20.08.21   
    Did more testing over the weekend on 5.9.9.  I was able to benchmark with FIO on top of a ZFS dataset for hours with the load average hitting 10+ while scrubbing the datastore.  No issues.  Right now the uptime is 3 days. 
     
    I'm actually a little surprised at the performance.  It's very decent for what it is. 
     
    I wonder if the fact that I'm running ZFS and 5.9.9 while others are using mdadm and 5.8 is the difference.  I'm not really planning on going backwards on either.  If 5.9.9 works then there's no need to build another kernel, and you would have to pry ZFS out of my cold dead hands.  I've spent enough of my life layering encryption/compression on top of partitions on top of volume management on top of partitions on top of disks.  ZFS is just better, and having penalty-free snapshots that I can replicate to other hosts over SSH is the icing on the cake. 
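    For readers unfamiliar with the replication workflow mentioned here, a minimal sketch of snapshot-and-send over SSH follows. The pool, dataset and host names are placeholders, not from the post; syncoid automates the incremental version of this:

```shell
#!/bin/sh
# Take a copy-on-write snapshot and replicate it to another host.
# "tank", "tank/data" and "backup-host" are hypothetical names.
POOL="tank"
SNAP="${POOL}/data@$(date +%Y%m%d-%H%M%S)"

if command -v zfs >/dev/null 2>&1 && zpool list "$POOL" >/dev/null 2>&1; then
    zfs snapshot "$SNAP"                    # near-instant, no copy penalty
    zfs send "$SNAP" | ssh backup-host "zfs recv -F ${POOL}/replica"
else
    echo "no pool '$POOL'; would run: zfs snapshot $SNAP"
fi
```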
     
  3. Like
    akschu got a reaction from TRS-80 in Stability issues on 20.08.21   
    I've been testing my Helios64 as well.  I'm running Armbian 20.08.21 Focal, but I also downloaded the kernel builder script from GitHub and built linux-image-current-rockchip64-20.11.0-trunk, which is a 5.9.9 kernel.  I installed that, then built OpenZFS 2.0.0-rc6.  I then proceeded to syncoid 2.15TB of snapshots to it while also doing a scrub, and was able to get the load average up to 10+.  The machine ran through the night, so I think it might be stable.  A few more days of testing will validate this.
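    The burn-in described above can be sketched as a one-shot script. Pool and source names are placeholders; syncoid comes from Jim Salter's sanoid project:

```shell
#!/bin/sh
# Stress test: scrub the pool while syncoid replicates snapshots into it,
# and sample the load average as both run. Names are hypothetical.
POOL="tank"
SRC="root@source-host:${POOL}/data"

if command -v zpool >/dev/null 2>&1 && zpool list "$POOL" >/dev/null 2>&1; then
    zpool scrub "$POOL"                  # scrub continues in the background
    syncoid -r "$SRC" "${POOL}/data" &   # replicate snapshots concurrently
    for i in 1 2 3; do uptime; sleep 60; done   # watch the load climb
    wait
else
    echo "no pool '$POOL'; skipping burn-in"
fi
```

If the box survives hours of this with the load average above 10, frequency and voltage are probably in a stable range.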
     
    schu