SteeMan Posted May 6

@ebin-dev Thanks for the explanation (I failed to notice that you were setting io_is_busy to a different value). I know that you are still working to stabilize this board, but at the end of this process it should be possible to submit a PR to incorporate your settings into the base builds, so I'm looking for things that would make that difficult. I just added a comment to PR6507 noting that the way io_is_busy is now set doesn't allow it to be overridden if needed.
ebin-dev Posted May 7 (edited)

On 5/6/2024 at 10:52 PM, SteeMan said: "... the way io_is_busy is now set doesn't allow it to be overridden if needed."

Thanks for the initiative with the PR. Here is what happens: the cpufreq policy sets a generic sampling_rate of 200000 (this is how often the governor's worker routine should run, in microseconds). The sampling_rate should instead be set to a nominal value close to the cpuinfo_transition_latency (49000 and 40000 nanoseconds for the two clusters of the rk3399). Since the latency is reported in nanoseconds but the sampling_rate is interpreted in microseconds, writing the raw latency value makes the sampling period effectively about 1000 times the transition latency. For rk3399 boards the resulting cluster sampling_rates of 49000 (big) and 40000 (little) are very responsive and not too demanding for the cpus; in that case no problems occur if io_is_busy is set to 1! I added a comment to PR6507 requesting that the sampling_rate become a per-cluster parameter, set to the value of that cluster's cpuinfo_transition_latency.
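The nanosecond/microsecond mismatch doing the work here can be sketched with a bit of shell arithmetic (a minimal sketch; the 49000 figure is the big-cluster latency quoted above, and the variable names are mine):

```shell
# cpuinfo_transition_latency is reported in nanoseconds, while
# ondemand/sampling_rate is interpreted in microseconds.  Echoing the
# raw latency number into sampling_rate therefore yields a sampling
# period about 1000 times the hardware transition latency.
lat_ns=49000              # big-cluster transition latency on rk3399 (ns)
sampling_rate_us=$lat_ns  # value written to ondemand/sampling_rate (us)
ratio=$(( sampling_rate_us * 1000 / lat_ns ))
echo "sampling period = ${ratio}x the transition latency"
```

So the governor re-evaluates load often enough to be responsive, but never faster than the hardware can actually change frequency.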
The fine-tuned values for Helios64 are these (io_is_busy set to 1 again, sampling_rate set to more responsive values):

for cpufreqpolicy in 0 4 ; do
    echo 1 > /sys/devices/system/cpu/cpufreq/policy${cpufreqpolicy}/ondemand/io_is_busy
    echo 25 > /sys/devices/system/cpu/cpufreq/policy${cpufreqpolicy}/ondemand/up_threshold
    echo 10 > /sys/devices/system/cpu/cpufreq/policy${cpufreqpolicy}/ondemand/sampling_down_factor
    echo $(cat /sys/devices/system/cpu/cpufreq/policy${cpufreqpolicy}/cpuinfo_transition_latency) > /sys/devices/system/cpu/cpufreq/policy${cpufreqpolicy}/ondemand/sampling_rate
done

Edit: With these changes the timeout issues of the 2.5G interface are resolved on 6.6.29 and 6.6.30 as well. Helios64 is one of the best ARM-based NAS systems I have seen so far; I am not going to replace it anytime soon.

Edited Wednesday at 11:55 AM by ebin-dev
BipBip1981 Posted May 9 (edited)

Hi, I use my own build from scratch with the official Armbian 24.05 framework, kernel 6.6.30 and the DTB file posted by ebin-dev here, with the ondemand governor at 408-1800 MHz. For the first time since I have owned it, it is stable and usable for my use cases and it passes my stability test pattern! Many thanks to ebin-dev for the great work.

Edited May 9 by BipBip1981
ebin-dev Posted Tuesday at 05:26 PM (edited)

There are new Armbian 24.05 images available on the Helios64 download page: both the Bookworm minimal and the Jammy Desktop image are based on linux 6.6.30 (download them!). Again, the rtl_nic firmware (in /lib/firmware/rtl_nic) should be replaced by the version downloaded from git.kernel.org, so that the 2.5G LAN interface works correctly.

I would also recommend copying the dtb attached below to /boot/dtb/rockchip/rk3399-kobol-helios64.dtb. It includes the 75 mV bump of the opp states for the fast cores as suggested by @prahal, enables the L2 cache info, and re-enables hs400 speed on emmc. In particular the 75 mV bump has a very positive effect on stability. The bootloader that comes with it would appear to contain the Rockchip DDR blob and should be fine; if you have an issue with u-boot, just flash linux-u-boot-edge-helios64_22.02.1_arm64 as recommended before.

The cpufreq ondemand governor is still the best choice. Good settings are:

# cat /etc/default/cpufrequtils
ENABLE=true
MIN_SPEED=600000
MAX_SPEED=1800000
GOVERNOR=ondemand

Enjoy.

P.S.: If you want a system more responsive to server tasks, or want to push the 2.5G interface to its limits, some fine tuning is helpful.

rk3399-kobol-helios64.dtb-6.6.30-L2-hs400-opp

Edited Friday at 07:01 PM by ebin-dev
SIGSEGV Posted Thursday at 08:12 PM (edited)

Thank you @ebin-dev for the dtb file and @prahal for figuring out which changes to make. I just reinstalled my NAS with the 6.6.30 kernel and the Bookworm image; the dtb file is the only change I've made to the configuration. No issues yet: recompiling ZFS KMOD 2.2.4 to run with the new kernel hasn't crashed, the max CPU frequency for the big cores is 1.8 GHz, NFS and SMB are working fine, and sbc-bench is stable as well.

Edited Saturday at 01:53 AM by SIGSEGV
snakekick Posted Friday at 01:17 PM (edited)

Thank you @ebin-dev, same here: 6.6.29 is (much more) stable with your dtb. I know we are all the community and can make changes ourselves, but would it be possible for you to open a ticket/change request (or whatever it is called) to get your changes into the main Armbian image, so that we have a well-running Helios out of the box? That would be fantastic!

Edited Friday at 01:18 PM by snakekick
ebin-dev Posted Friday at 03:20 PM

Actually, the changes to the dtb were figured out and proposed by @prahal, so it would be up to Prahal to submit a PR. From my perspective there is nothing against a PR right now, as the changes are sufficiently tested and have a very positive effect on stability.
grek Posted Friday at 06:15 PM

Thanks guys. Last time I used Armbian 23.5.4 with the 5.15 kernel and had about 60 days of uptime. Today I switched to Armbian 24.05 with the patched DTB. I'm using ZFS, so all I do is keep the older packages and put a hold on future updates. I have SMB and NFS configured; I ran some tests and everything looks very good. Even my power consumption is down (I have 2x WD RED 4TB + Exos 18TB + WD RED 14TB + 1 SSD): it is now about 23 W.

Currently I still have the 'previous' cpufreq values (for comparison):

root@helios64:~# cat /etc/default/cpufrequtils
ENABLE=true
MIN_SPEED=408000
MAX_SPEED=1200000
GOVERNOR=ondemand
SIGSEGV Posted Saturday at 01:56 AM

@ebin-dev - one quick question: will applying this script at boot change the sampling rate to the desired values?

for cpufreqpolicy in 0 4 ; do
    echo 1 > /sys/devices/system/cpu/cpufreq/policy${cpufreqpolicy}/ondemand/io_is_busy
    echo 25 > /sys/devices/system/cpu/cpufreq/policy${cpufreqpolicy}/ondemand/up_threshold
    echo 10 > /sys/devices/system/cpu/cpufreq/policy${cpufreqpolicy}/ondemand/sampling_down_factor
    echo $(cat /sys/devices/system/cpu/cpufreq/policy${cpufreqpolicy}/cpuinfo_transition_latency) > /sys/devices/system/cpu/cpufreq/policy${cpufreqpolicy}/ondemand/sampling_rate
done
ebin-dev Posted Saturday at 06:55 AM (edited)

Given its current state, I have disabled armbian-hardware-optimization (some 'volunteers' are needed to fix it) and run the following code from /etc/rc.local instead. Alternatively, you could look for the line 'echo 200000' in armbian-hardware-optimization and change the lines around it accordingly.

@SIGSEGV Yes - that sets the per-cluster sampling_rates to 40000 and 51000 on the current kernel (specifically, the line starting with 'echo $(cat ...' does).

#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

cd /sys/devices/system/cpu/cpufreq
for cpufreqpolicy in 0 4 ; do
    echo 1 > policy${cpufreqpolicy}/ondemand/io_is_busy
    echo 25 > policy${cpufreqpolicy}/ondemand/up_threshold
    echo 10 > policy${cpufreqpolicy}/ondemand/sampling_down_factor
    echo $(cat policy${cpufreqpolicy}/cpuinfo_transition_latency) > policy${cpufreqpolicy}/ondemand/sampling_rate
done

for i in $(awk -F":" "/ahci/ {print \$1}" < /proc/interrupts | sed 's/\ //g'); do
    echo 30 > /proc/irq/$i/smp_affinity
done
for i in $(awk -F":" "/xhci/ {print \$1}" < /proc/interrupts | sed 's/\ //g'); do
    echo 20 > /proc/irq/$i/smp_affinity
done

exit 0

Edited Saturday at 01:27 PM by ebin-dev
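For reference, the smp_affinity values written by the rc.local above are hexadecimal CPU bitmasks (/proc/irq accepts hex without a 0x prefix). A minimal sketch decoding them, assuming the usual rk3399 layout with the big A72 cores as cpus 4-5 (the variable names are mine):

```shell
# The script writes 30 (ahci) and 20 (xhci) to smp_affinity:
#   0x30 = 0b110000 -> cpus 4 and 5 (both big cores)
#   0x20 = 0b100000 -> cpu 5 only
mask_ahci=0x30
mask_xhci=0x20
for cpu in 0 1 2 3 4 5; do
    if [ $(( mask_ahci >> cpu & 1 )) -eq 1 ]; then
        echo "ahci interrupts allowed on cpu${cpu}"
    fi
    if [ $(( mask_xhci >> cpu & 1 )) -eq 1 ]; then
        echo "xhci interrupts allowed on cpu${cpu}"
    fi
done
```

In other words, the SATA and USB interrupt load is moved off the little cluster and onto the big cores.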
ebin-dev Posted Saturday at 09:25 AM (edited)

18 hours ago, grek said: "Currently I have 'previous' values of cpu freq ... MIN_SPEED=408000 MAX_SPEED=1200000 ..."

Just a remark: setting MIN_SPEED to 600 MHz would have NO effect on power consumption, but less frequency switching would be required: switching between 408 and 600 MHz would be avoided, and the system could transition directly from 600 MHz to the highest frequency (essentially switching between only two states per cpu). The huge power savings are impressive!

Edited Saturday at 12:17 PM by ebin-dev
ebin-dev Posted Saturday at 09:28 PM (edited)

Sorry - I am probably the only one interested in the network performance of the 2.5G interface (theoretical max 2.35 Gbit/s). iperf 3.17.1 measurements attached.

# ./iperf3 -c 192.168.xx.30 -p 5201    (client -> helios64)
Connecting to host 192.168.xx.30, port 5201
[ 5] local 192.168.xx.54 port 54011 connected to 192.168.xx.30 port 5201
[ ID] Interval         Transfer     Bitrate
[ 5]  0.00-1.01  sec   284 MBytes   2.37 Gbits/sec
[ 5]  1.01-2.01  sec   280 MBytes   2.35 Gbits/sec
[ 5]  2.01-3.01  sec   281 MBytes   2.36 Gbits/sec
[ 5]  3.01-4.00  sec   278 MBytes   2.34 Gbits/sec
[ 5]  4.00-5.01  sec   280 MBytes   2.34 Gbits/sec
[ 5]  5.01-6.01  sec   281 MBytes   2.36 Gbits/sec
[ 5]  6.01-7.01  sec   280 MBytes   2.35 Gbits/sec
[ 5]  7.01-8.01  sec   280 MBytes   2.35 Gbits/sec
[ 5]  8.01-9.01  sec   281 MBytes   2.36 Gbits/sec
[ 5]  9.01-10.01 sec   281 MBytes   2.35 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval         Transfer     Bitrate
[ 5]  0.00-10.01 sec   2.74 GBytes  2.35 Gbits/sec  sender
[ 5]  0.00-10.01 sec   2.74 GBytes  2.35 Gbits/sec  receiver

iperf Done.
# ./iperf3 -c 192.168.xx.30 -p 5201 -R    (helios64 -> client)
Connecting to host 192.168.xx.30, port 5201
Reverse mode, remote host 192.168.xx.30 is sending
[ 5] local 192.168.xx.54 port 54013 connected to 192.168.xx.30 port 5201
[ ID] Interval         Transfer     Bitrate
[ 5]  0.00-1.01  sec   276 MBytes   2.30 Gbits/sec
[ 5]  1.01-2.01  sec   280 MBytes   2.35 Gbits/sec
[ 5]  2.01-3.00  sec   280 MBytes   2.35 Gbits/sec
[ 5]  3.00-4.01  sec   280 MBytes   2.35 Gbits/sec
[ 5]  4.01-5.01  sec   280 MBytes   2.35 Gbits/sec
[ 5]  5.01-6.00  sec   280 MBytes   2.35 Gbits/sec
[ 5]  6.00-7.01  sec   281 MBytes   2.35 Gbits/sec
[ 5]  7.01-8.00  sec   280 MBytes   2.35 Gbits/sec
[ 5]  8.00-9.01  sec   281 MBytes   2.35 Gbits/sec
[ 5]  9.01-10.01 sec   280 MBytes   2.35 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval         Transfer     Bitrate         Retr
[ 5]  0.00-10.01 sec   2.74 GBytes  2.35 Gbits/sec  16    sender
[ 5]  0.00-10.01 sec   2.73 GBytes  2.35 Gbits/sec        receiver

iperf Done.

# ./iperf3 -c 192.168.xx.30 -p 5201 --bidir    (bidirectional)
Connecting to host 192.168.xx.30, port 5201
[ 5] local 192.168.xx.54 port 49681 connected to 192.168.xx.30 port 5201
[ 7] local 192.168.xx.54 port 49682 connected to 192.168.xx.30 port 5201
[ ID][Role] Interval         Transfer     Bitrate
[ 5][TX-C]  0.00-1.01  sec   252 MBytes   2.11 Gbits/sec
[ 7][RX-C]  0.00-1.01  sec   206 MBytes   1.72 Gbits/sec
[ 5][TX-C]  1.01-2.01  sec   252 MBytes   2.11 Gbits/sec
[ 7][RX-C]  1.01-2.01  sec   207 MBytes   1.74 Gbits/sec
[ 5][TX-C]  2.01-3.01  sec   244 MBytes   2.05 Gbits/sec
[ 7][RX-C]  2.01-3.01  sec   213 MBytes   1.78 Gbits/sec
[ 5][TX-C]  3.01-4.01  sec   196 MBytes   1.65 Gbits/sec
[ 7][RX-C]  3.01-4.01  sec   223 MBytes   1.87 Gbits/sec
[ 5][TX-C]  4.01-5.01  sec   221 MBytes   1.86 Gbits/sec
[ 7][RX-C]  4.01-5.01  sec   217 MBytes   1.82 Gbits/sec
[ 5][TX-C]  5.01-6.01  sec   206 MBytes   1.73 Gbits/sec
[ 7][RX-C]  5.01-6.01  sec   223 MBytes   1.87 Gbits/sec
[ 5][TX-C]  6.01-7.01  sec   214 MBytes   1.80 Gbits/sec
[ 7][RX-C]  6.01-7.01  sec   220 MBytes   1.84 Gbits/sec
[ 5][TX-C]  7.01-8.01  sec   215 MBytes   1.80 Gbits/sec
[ 7][RX-C]  7.01-8.01  sec   217 MBytes   1.82 Gbits/sec
[ 5][TX-C]  8.01-9.01  sec   187 MBytes   1.57 Gbits/sec
[ 7][RX-C]  8.01-9.01  sec   224 MBytes   1.88 Gbits/sec
[ 5][TX-C]  9.01-10.01 sec   206 MBytes   1.73 Gbits/sec
[ 7][RX-C]  9.01-10.01 sec   218 MBytes   1.83 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID][Role] Interval         Transfer     Bitrate         Retr
[ 5][TX-C]  0.00-10.01 sec   2.14 GBytes  1.84 Gbits/sec        sender
[ 5][TX-C]  0.00-10.01 sec   2.14 GBytes  1.84 Gbits/sec        receiver
[ 7][RX-C]  0.00-10.01 sec   2.12 GBytes  1.82 Gbits/sec  13    sender
[ 7][RX-C]  0.00-10.01 sec   2.12 GBytes  1.82 Gbits/sec        receiver

iperf Done.

Edited 57 minutes ago by ebin-dev: added bidirectional results
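The ~2.35 Gbit/s ceiling quoted above is consistent with ordinary Ethernet/TCP framing overhead. A back-of-the-envelope check (my own arithmetic, assuming standard 1500-byte MTU frames and TCP timestamps enabled, as is the Linux default):

```shell
# Per full-size frame on the wire: 12 (inter-frame gap) + 8 (preamble)
# + 14 (ethernet header) + 4 (FCS) + 1500 (payload) = 1538 bytes.
# TCP with timestamps carries 1448 data bytes per frame
# (1500 - 20 IP - 20 TCP - 12 TCP options).
wire_bytes=1538
data_bytes=1448
link_mbit=2500
goodput_mbit=$(( link_mbit * data_bytes / wire_bytes ))
echo "max TCP goodput on 2.5GbE ~ ${goodput_mbit} Mbit/s"
```

That works out to roughly 2.35 Gbit/s, so the single-stream measurements above are effectively at line rate.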
TDCroPower Posted 19 hours ago

@ebin-dev You are not alone, I also use the 2.5G interface 😉