tparys
There's an Armbian-provided systemd service named armbian-led-state with the following description:

# Armbian led state save and restore
# Stores the current led state at shutdown and restores
# it during bootstrap

Maybe that's it? If so, you could disable it if it bothers you? Or adjust the script it calls at /usr/lib/armbian/armbian-led-state-save.sh?
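Disabling it should just be a one-liner (untested on my end; the unit name is taken straight from the service description above):

sudo systemctl disable armbian-led-state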
-
How you can help test upcoming Armbian 26.02 images?
tparys replied to Igor's topic in Advanced users - Development
Update on NanoPi M4v2 ... I pulled the most recent Armbian build, manually ran the build for the M4v2, and it fails on kernel patch "general-gpio-driver-no-sleep". See pastebin at https://paste.armbian.com/puvorasuco for the failed build.

Checking upstream at https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git, it looks like branch 6.18.y already has the subject patch, and it was pushed today. Since both the rockchip and rockchip64 families use the mainline kernel, I just submitted a PR that kills that patch for both. See https://github.com/armbian/build/pull/9368.
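If anyone wants to reproduce it, a stock single-board build should hit it; something like the following (board name from memory, double-check against config/boards in the build repo):

./compile.sh BOARD=nanopim4v2 BRANCH=current
-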
Sorry in advance, but this is a bit of a dive. I haven't seen this fully captured and explained anywhere, and I think it would be a useful thing to know for anyone doing arm64 development on a PC, which seems very relevant here ...

I started pulling this thread when moving some software from Ubuntu Jammy to Noble, where the arm64 build process went from 20 mins to 50 mins. When I was done, it was back down to 15 min, and all I did was poke some flags for qemu's emulated CPU.

The TL;DR version is that setting the environment variable QEMU_CPU to something like "cortex-a53" or "max,pauth-impdef=on", or even what I settled on, "max,pauth=off", saves massive amounts of time, and should be used anywhere you are running debootstrap, apt, or python in an emulated environment. However, you can't blindly set it everywhere, because these flags are not defined for other architectures and will cause a hard stop when emulating PC or RISC-V. This has also shown up in the Armbian Build System.

If you want to see this first hand, try the example below. Just save the following as test.sh and make it executable (chmod +x test.sh):

#!/bin/bash
echo -n "testing... "
for i in $(seq 2 10000); do
    is_prime=1
    j=2
    while (( j*j <= i )); do
        if (( i % j == 0 )); then
            is_prime=0
            break
        fi
        ((j++))
    done
    if (( is_prime == 1 )); then
        echo $i > /dev/null
    fi
done
echo "done"

And then run it in docker after ensuring a few things are installed ...

~ $ sudo apt install binfmt-support qemu-user-static
~ $ time docker run --rm -it --platform linux/arm64 -v .:/test ubuntu:noble /test/test.sh
testing... done

real    0m37.620s
user    0m0.013s
sys     0m0.022s

~ $ time docker run --rm -it --platform linux/arm64 -v .:/test ubuntu:jammy /test/test.sh
testing... done

real    0m4.700s
user    0m0.011s
sys     0m0.023s

And we can wrestle that performance back by fiddling with QEMU flags ...

~ $ time docker run --rm -it --platform linux/arm64 -v .:/test --env QEMU_CPU=max,pauth=off ubuntu:noble /test/test.sh
testing... done

real    0m4.694s
user    0m0.011s
sys     0m0.024s

So, qemu bug? Not quite. The qemu emulator is a host application, and is the same for both the jammy and noble docker images. I think the root cause was found here, and it first appears in Ubuntu Lunar (23.10). The short version is that gcc's stack protection logic wasn't operating as expected, since the arm64 stack layout is a little different than it is on PC; a CVE was filed, and the "fix" is now stressing a slow code path in qemu. For the record, my heart goes out to "steev" and his slow, hours-per-bisect march to the answer.

Pulling that thread a bit more, the QEMU documentation has a section on arm64 pointer authentication and the cost of emulating it. The qemu docs also suggest that the qemu impdef algorithm is the default, but I've not seen this on my version. It's possible this may be addressed in a much newer version of qemu, but that's not available in the Noble repos. It could be overridden via tonistiigi/binfmt (but I've not yet tested that).

For what it's worth, it's not possible to just add a simple wrapper around qemu to fix this either, as seen in this Github Example, and that's due to the Linux Kernel Binfmt Interface. The 'F' flag forces the kernel to store a handle to the specified emulator and make it available in chroot and Docker contexts, and that doesn't help if it's only the wrapper it grabs and not the emulator binary. Similarly, it's not possible to pass additional flags via this interface, so without a change to the qemu binary, the QEMU_CPU environment variable may be the only way to work around this immediately.
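Since the flag only exists for arm64 targets, anywhere you export it in a build script you probably want a guard along these lines. This is just an illustrative sketch, not something from the Armbian build system; the TARGET_ARCH variable name is a placeholder for whatever your script already uses:

# Only set QEMU_CPU when emulating arm64; the pauth flags don't exist
# on other architectures and will make qemu bail out.
if [ "$TARGET_ARCH" = "arm64" ]; then
    export QEMU_CPU="max,pauth=off"
else
    unset QEMU_CPU
fi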
The other curious bit is: what does QEMU_CPU=cortex-a53 enable to recover qemu speed? The answer is absolutely nothing; it just turns CPU features off. At a glance, I'm not sure if that's something qemu is doing indirectly, or something glibc conditionally enables at runtime. If anyone knows better than I do here, please drop a comment. The curious can check via:

~ $ docker run --rm -it --platform linux/arm64 ubuntu:noble cat /proc/cpuinfo
processor       : 0
model name      : ARMv8 Processor rev 0 (v8l)
BogoMIPS        : 100.00
Features        : fp asimd aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svef32mm svef64mm svebf16 i8mm bf16 rng bti mte mte3 sme smei16i64 smef64f64 smei8i32 smef16f32 smeb16f32 smef32f32 smefa64 mops hbc
CPU implementer : 0x00
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0x051
CPU revision    : 0

~ $ docker run --rm -it --platform linux/arm64 --env QEMU_CPU=cortex-a53 ubuntu:noble cat /proc/cpuinfo
processor       : 0
model name      : ARMv8 Processor rev 4 (v8l)
BogoMIPS        : 100.00
Features        : fp asimd aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4
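If anyone wants to poke at the glibc angle, one quick check (just an idea, I haven't chased this down) is to look at the hardware capability bits the dynamic loader is handed inside the emulated container, since that's what glibc keys its runtime function selection off of:

~ $ docker run --rm -it --platform linux/arm64 --env LD_SHOW_AUXV=1 ubuntu:noble /bin/true
~ $ docker run --rm -it --platform linux/arm64 --env LD_SHOW_AUXV=1 --env QEMU_CPU=cortex-a53 ubuntu:noble /bin/true

Each run dumps the auxiliary vector, including the AT_HWCAP / AT_HWCAP2 bits; comparing the two should show which features disappear along with the slowdown.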
-
How you can help test upcoming Armbian 26.02 images?
tparys replied to Igor's topic in Advanced users - Development
Board: ODroid C4
Image: Armbian_26.2.1_Odroidc4_noble_current_6.18.8_xfce_desktop.img.xz

Tested OK:
- Installation to SD card
- Boot to desktop, login
- OpenGL / Panfrost
- USB 2.0
- Ethernet

Not tested:
- Audio via HDMI (though alsamixer reports there's stuff there)

If anything, the serial console seems rather quiet. But attached ...
minicom.cap
-
How you can help test upcoming Armbian 26.02 images?
tparys replied to Igor's topic in Advanced users - Development
I have two boards I can test: ODroid C4 and NanoPi M4v2. I'll pull down the C4 build and give it a go. I don't see a build for the M4v2? (EDIT - Doesn't look like the GitHub action failed.)
-
Ever since I read through THIS POST a while back, I started digging through the cpufrequtils init script, and was more or less disappointed with what I found. It's largely a product of the CPUs that were available around that time (eg - Pentium 3). Namely, it assumes all cores in the system are the same, and that there should be only one master policy that controls everything. Of course ARM big.LITTLE totally breaks these assumptions, and leaves the script with no viable way to specify different governors or frequency ranges for each CPU cluster. It still runs, but not really correctly. For example, you can't set the RK3399 little cores above 1.4 GHz, but that's basically what it attempts to do on every boot. Also, it needs to use "cpufreq-set" to do its job, which seems too hard when the sysfs interface is already pretty simple. Really, that extra complexity isn't buying you a single thing. Maybe it made more sense 10-15 years ago.

So I took a crack at doing a far simpler, stupid version of that script (or perhaps smarter, depending on perspective). It can generate its own config, and I think it comes across as much more readable and accessible:

###
# CPUFreq policy for CPUs: 0 1 2 3
#

# CPU Frequency Governor
# Avail: conservative ondemand userspace powersave performance schedutil
POLICY0_GOVERNOR=ondemand

# Min/Max Frequency Selection
# Avail: 408000 600000 816000 1008000 1200000 1416000
POLICY0_MIN_FREQ=408000
POLICY0_MAX_FREQ=1416000

###
# CPUFreq policy for CPUs: 4 5
#

# CPU Frequency Governor
# Avail: conservative ondemand userspace powersave performance schedutil
POLICY4_GOVERNOR=conservative

# Min/Max Frequency Selection
# Avail: 408000 600000 816000 1008000 1200000 1416000 1608000 1800000
POLICY4_MIN_FREQ=408000
POLICY4_MAX_FREQ=1800000

And using it isn't too hard ...

sudo cp cpufrequtils-bl /etc/init.d/
sudo /etc/init.d/cpufrequtils-bl save
sudo vi /etc/default/cpufrequtils-bl
sudo systemctl disable cpufrequtils
sudo systemctl enable cpufrequtils-bl

I know that this is probably pretty far down on the list of priorities, but does anyone think that this would be useful to others?

cpufrequtils-bl
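For reference, the sysfs interface mentioned above really is just a handful of writable files; the POLICY0 example maps to something like this (run as root; on the RK3399, policy0 is the little cluster and policy4 the big one):

# Apply the POLICY0 settings by hand, no cpufreq-set required
echo ondemand > /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
echo 408000   > /sys/devices/system/cpu/cpufreq/policy0/scaling_min_freq
echo 1416000  > /sys/devices/system/cpu/cpufreq/policy0/scaling_max_freq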
