grek
Posts: 21 - Joined - Last visited
Reputation Activity
-
grek reacted to prahal in Helios64 - Armbian 23.08 Bookworm issues (solved)
Started work on syncing the helios64 dts to upstream for 6.9: https://github.com/prahal/build/tree/helios64-6.9 .
I removed the overclock-disabling patch, since it disables an overclock that nowadays is not in the included rk3399-opp.dtsi anyway (i.e. cluster0 has no opp6 and cluster1 has no opp8). It was not a high priority before, but now that the helios64 dts is starting to change, the patch only carries unnecessary work.
The patchset applies, and the helios64 dts compiles fine to a dtb.
Kernel built and booted (see below for the network connection, i.e. the ethernet MAC fixup).
There are not many functional changes on my side (there are in the upstream dts). There are a few differences with the upstream dts I did not bring over, as I don't know whether they are leftovers from the initial patch set or new fixups; most of these could already have been an issue before 6.9. There is at least one new upstream change, 93b36e1d3748c352a70c69aa378715e6572e51d1 "arm64: dts: rockchip: Fix USB interface compatible string on kobol-helios64", that I did bring forward.
I also brought in the vcc3v0_sd node properties "enable-active-high;" and "gpio = <&gpio0 RK_PA1 GPIO_ACTIVE_HIGH>;".
Beware: the ethernet MAC changes (now working as designed). Keeping the aliases from upstream fixes the ability of eth0 (later renamed to end0 by armbian-hardware-optimization) to get the ethernet MAC grabbed from OTP via SPI in u-boot. So if you had bound the kernel-generated MAC address (instead of the hardware one) to a host and IP, you will have to find the new IP assigned by the DHCP server to connect to the helios64.
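As a quick check (a minimal sketch, assuming the interface has already been renamed to end0 by armbian-hardware-optimization), you can verify which MAC the interface ended up with and what address the new DHCP lease gave it:
# MAC address actually applied to the renamed interface
ip link show end0
cat /sys/class/net/end0/address
# current IPv4 address, in case the DHCP lease changed
ip -4 addr show end0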
I believe this is fine for edge even if not for current (so should be good for 6.9?).
-
grek got a reaction from SIGSEGV in Helios64 - Armbian 23.08 Bookworm issues (solved)
Thanks guys,
Last time I used Armbian 23.5.4 with the 5.15 kernel, and I had about 60 days of uptime. Today I switched to Armbian 24.05 with the patched DTB.
I'm using ZFS, so all I do is stay on the older packages and put a hold on future updates.
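For the hold itself, something like this works (a sketch; the exact ZFS package names on your install may differ):
# pin the currently installed ZFS packages so apt upgrade leaves them alone
apt-mark hold zfsutils-linux zfs-dkms zfs-zed
# list what is currently held
apt-mark showhold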
I have SMB and NFS configured; I ran some tests and everything looks very good.
Even my power consumption is down (I have 2x WD RED 4 TB + an Exos 18 TB + a WD RED 14 TB + 1 SSD); it is now about 23 W.
Currently I kept the 'previous' CPU frequency values (to have something to compare against):
root@helios64:~# cat /etc/default/cpufrequtils
ENABLE=true
MIN_SPEED=408000
MAX_SPEED=1200000
GOVERNOR=ondemand
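To apply changed limits without a reboot, restarting the service and checking sysfs should be enough (a sketch, assuming the cpufrequtils package that Armbian ships):
systemctl restart cpufrequtils
# confirm the governor and limits actually in effect
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq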
-
grek reacted to BinaryWaves in 5.15 broke zfs-dkms
Hi @grek, I had some kernel panics and instability on my Helios64, but this was mostly fixed by limiting the CPU clock, as the Helios64 had some issues at full speed. I usually set it to the 1400 option to be safe, but I believe 1600 is fine too? Need to test again. I just had to reinstall because my OMV was messed up, so I'll see how it goes. Sometimes freezing the kernel upgrades might help too, but I'm going to try to let it update to 22.08.6 tonight and see how it does with OMV. Will report back later.
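For the freeze, either the freeze option in armbian-config or holding the kernel packages directly with apt-mark works; a sketch, with the rockchip64 package names as an assumption:
# keep apt upgrade from pulling a new kernel/dtb until you unhold them
apt-mark hold linux-image-current-rockchip64 linux-dtb-current-rockchip64 linux-headers-current-rockchip64
# undo later with
apt-mark unhold linux-image-current-rockchip64 linux-dtb-current-rockchip64 linux-headers-current-rockchip64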
-
grek reacted to aprayoga in Kobol Team is pulling the plug ;(
Hi everyone, thank you for all of your support.
I'll still be around, but not full time as before.
I saw that the Helios64 has several issues in this new release. I will start looking into them.
-
grek got a reaction from gprovost in Crazy instability :(
I have tried to change the CPU governor to ondemand:
root@helios64:/# cat /etc/default/cpufrequtils
ENABLE="true"
GOVERNOR=ondemand
MAX_SPEED=1800000
MIN_SPEED=408000
but after 5 minutes my server rebooted.
Previously I used:
cat /etc/default/cpufrequtils
ENABLE="true"
GOVERNOR=performance
MAX_SPEED=1416000
MIN_SPEED=1416000
for the last 3 kernel updates without any issue. Uptimes were mostly about 2-3 weeks without kernel panics or reboots.
Unfortunately I have nothing in my logs. I have changed the verbosity to 7, and I'm logging the console to a file. I will update when I get an error.
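Capturing the serial console from another machine is the easiest way to catch a panic; a sketch, assuming the Helios64 USB console shows up as /dev/ttyUSB0 and runs at the usual RK3399 1500000 baud:
# log everything the console prints to console.log
screen -L -Logfile console.log /dev/ttyUSB0 1500000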
I have a ZFS mirror (LUKS on top) + a ZFS cache.
Also, I didn't change the disk scheduler. I have bfq, but it has worked since the last upgrade without any problem.
Currently I'm running a zfs scrub... we will see.
[update 1]:
root@helios64:/# uptime
11:02:40 up 2:28, 1 user, load average: 4.80, 4.81, 4.60
The scrub is still in progress, but I got a few read errors. I'm not sure if it's a problem with the HDD connector, the HDD scheduler, or the CPU freq...
I didn't have these before.
[update 2]:
The server has rebooted...
The last 'task' I remember was Nextcloud generating thumbnails.
Unfortunately there was nothing on the console (it was connected the whole time), even with:
root@helios64:~# cat /boot/armbianEnv.txt
verbosity=7
console=serial
extraargs=earlyprintk ignore_loglevel
The "failed command: READ FPDMA QUEUED" errors came from the zfs scrub a few hours earlier.
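To narrow down which drive threw the READ FPDMA QUEUED errors, something like this is usually enough (a sketch using standard tools; /dev/sdX is a placeholder for the suspect disk):
# kernel-side ATA errors, including which ata port reported them
dmesg | grep -iE "fpdma|ata[0-9]"
# per-device read/write/checksum counters from the scrub
zpool status -v
# SMART health of the suspect drive (needs the smartmontools package)
smartctl -a /dev/sdX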
-
grek got a reaction from iav in ZFS on Helios64
Hey,
I think a dedicated topic for ZFS on the Helios64 is needed; many people want to have it.
As we know, many of us are playing with dev builds for the Helios64, and it is not easy to work with ZFS.
I wrote a few scripts; maybe they can help someone in some way.
Thanks @jbergler and @ShadowDance, I used your ideas to complete it.
The problem comes when playing with OMV and its ZFS plugin: the dependencies then remove our latest version of ZFS. I tested it; everything survived, but ZFS gets downgraded to the version from the repo.
for example:
root@helios64:~# zfs --version
zfs-0.8.4-2~bpo10+1
zfs-kmod-2.0.0-rc6
root@helios64:~# uname -a
Linux helios64 5.9.11-rockchip64 #trunk.1 SMP PREEMPT Thu Nov 26 01:32:45 CET 2020 aarch64 GNU/Linux
root@helios64:~#
I tested it with kernel 5.9.10 and with today's clean install with 5.9.11.
First we need to have Docker installed (armbian-config -> Software -> Softy -> docker).
Create a dedicated directory with a Dockerfile (we will customize the ubuntu:bionic image with the required libraries and GCC 10; in later builds we can skip this step and just customize and run the build-zfs.sh script).
mkdir zfs-builder
cd zfs-builder
vi Dockerfile
FROM ubuntu:bionic
RUN apt update; \
    apt install build-essential autoconf automake bison flex libtool gawk alien fakeroot dkms \
        libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev \
        python3 python3-dev python3-setuptools python3-cffi libffi-dev -y; \
    apt install software-properties-common -y; \
    add-apt-repository ppa:ubuntu-toolchain-r/test; \
    apt install gcc-10 g++-10 -y; \
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10; \
    update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10
Build the Docker image used for the build:
docker build --tag zfs-build-ubuntu-bionic:0.1 .
Create the script for the ZFS build:
vi build-zfs.sh
#!/bin/bash
# define zfs version
zfsver="zfs-2.0.0-rc6"

# creating building directory
mkdir /tmp/zfs-builds && cd "$_"
rm -rf /tmp/zfs-builds/inside_zfs.sh

apt-get download linux-headers-current-rockchip64
git clone -b $zfsver https://github.com/openzfs/zfs.git $zfsver-$(uname -r)

# create file to execute inside container
echo "creating file to execute it inside container"
cat > /tmp/zfs-builds/inside_zfs.sh <<EOF
#!/bin/bash
cd scratch/
dpkg -i linux-headers-current-*.deb
zfsver="zfs-2.0.0-rc6"
cd "/scratch/$zfsver-$(uname -r)"
sh autogen.sh
./configure
make -s -j$(nproc)
make deb
mkdir "/scratch/deb-$zfsver-$(uname -r)"
cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
rm -rf "/scratch/$zfsver-$(uname -r)"
exit
EOF

chmod +x /tmp/zfs-builds/inside_zfs.sh

echo ""
echo "####################"
echo "starting container.."
echo "####################"
echo ""
docker run --rm -it -v /tmp/zfs-builds:/scratch zfs-build-ubuntu-bionic:0.1 /bin/bash /scratch/inside_zfs.sh

# Cleanup packages (if installed).
modprobe -r zfs zunicode zzstd zlua zcommon znvpair zavl icp spl
apt remove --yes zfsutils-linux zfs-zed zfs-initramfs
apt autoremove --yes

dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/kmod-zfs-$(uname -r)*.deb
dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/{libnvpair1,libuutil1,libzfs2,libzpool2,python3-pyzfs,zfs}_*.deb

echo ""
echo "###################"
echo "building complete"
echo "###################"
echo ""
chmod +x build-zfs.sh
screen -L -Logfile buildlog.txt ./build-zfs.sh
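After the build finishes, a quick sanity check of the freshly installed packages could look like this (a sketch; versions will match whatever was just built):
# load the newly built module and confirm the userland matches
modprobe zfs
zfs --version
# existing pools can then be re-imported
zpool import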
-
grek got a reaction from gprovost in Can't access OMV after first install
To install OMV, log in to your Helios64 via SSH, then run:
root@helios64:~# armbian-config
and go to Software -> Softy -> OMV.
You can verify that it was installed:
root@helios64:~# dpkg -l | grep openmediavault
ii  openmediavault                 5.5.17-3   all    openmediavault - The open network attached storage solution
ii  openmediavault-fail2ban        5.0.5      all    OpenMediaVault Fail2ban plugin
ii  openmediavault-flashmemory     5.0.7      all    folder2ram plugin for OpenMediaVault
ii  openmediavault-keyring         1.0        all    GnuPG archive keys of the OpenMediaVault archive
ii  openmediavault-luksencryption  5.0.2      all    OpenMediaVault LUKS encryption plugin
ii  openmediavault-omvextrasorg    5.4.2      all    OMV-Extras.org Package Repositories for OpenMediaVault
ii  openmediavault-zfs             5.0.5      arm64  OpenMediaVault plugin for ZFS
root@helios64:~#
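Once the packages show up, the web interface should be reachable on the board's IP over HTTP; a quick check from the shell (a sketch, assuming the OMV 5.x service names):
# OMV's backend daemon and the nginx frontend should both be active
systemctl status openmediavault-engined nginx --no-pager
# if the web UI is unreachable, omv-firstaid can reconfigure the listening port/interface
omv-firstaid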
-
grek got a reaction from Atticka in ZFS on Helios64
-
grek got a reaction from gprovost in Helios64 - freeze whatever the kernel is.
Did you try jumping to Armbian 20.11 and kernel 5.9.10?
I've been using 5.9.10 since Friday without any issues. Before that it was 5.9.9, and I successfully copied more than 2 TB of data (ZFS raid1).
I also ran Docker and compiled ZFS from sources a few times. The load was high, but I don't see any issues.
-
grek got a reaction from TonyMac32 in ZFS on Helios64
-
grek got a reaction from gprovost in ZFS on Helios64