

amazingfate
Posts posted by amazingfate
-
3 hours ago, tanod said:
but cannot find Armbian jammy ISO for my Orange Pi 5 Plus
Armbian no longer releases images for this old LTS, but you can build one yourself from github.com/armbian/build
3 hours ago, tanod said:
I already tried the flatpak version on my current noble but even though it runs fine on Wayland it says no HW acc and then I get stuck trying to have it working. Is it going to have HW acc if I succeed to build it?
Yes. The flatpak version uses an ffmpeg bundled inside its own sandbox, without rkmpp support; if you build the deb locally, it will link against the system ffmpeg, which has rkmpp support.
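A minimal sketch of building a jammy image yourself with the Armbian build framework; the board name "orangepi5-plus" and branch "vendor" are my assumptions, so check the board list the framework shows you. The actual build is gated behind DO_BUILD=yes because it takes a long time and needs a supported host (or Docker).

```shell
# Board/branch/release names are assumptions -- verify against the
# build framework's own board list before running.
BOARD=orangepi5-plus
BRANCH=vendor
RELEASE=jammy

# Gated: a full image build is slow and needs a beefy build host.
if [ "${DO_BUILD:-no}" = yes ]; then
    git clone --depth=1 https://github.com/armbian/build
    cd build
    ./compile.sh BOARD="$BOARD" BRANCH="$BRANCH" RELEASE="$RELEASE" \
        BUILD_DESKTOP=no BUILD_MINIMAL=no
fi
```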
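A quick way to confirm which ffmpeg you ended up with, assuming ffmpeg's usual `-decoders` listing format: the rkmpp entries (e.g. h264_rkmpp) only appear when ffmpeg was built with Rockchip MPP support.

```shell
# has_rkmpp: succeed if a decoder listing on stdin mentions rkmpp
has_rkmpp() { grep -q rkmpp; }

# Ask the system ffmpeg which decoders it was built with.
if ffmpeg -hide_banner -decoders 2>/dev/null | has_rkmpp; then
    echo "system ffmpeg has rkmpp decoders"
else
    echo "no rkmpp decoders found"
fi
```

Run the same check inside the flatpak sandbox and you should see the difference.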
-
4 hours ago, tanod said:
apt policy moonlight-qt
N: Packet moonlight-qt cannot be found
moonlight-qt is not packaged in this PPA for noble; you can build it from source: https://salsa.debian.org/amazingfate/moonlight-qt
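A sketch of the usual Debian-packaging build flow for that repo, gated behind DO_BUILD=yes; it assumes the repo contains a debian/ directory and that deb-src entries are enabled for `apt build-dep`.

```shell
# Gated sketch: build the moonlight-qt deb from the packaging repo.
if [ "${DO_BUILD:-no}" = yes ]; then
    sudo apt install -y git devscripts build-essential
    git clone https://salsa.debian.org/amazingfate/moonlight-qt
    cd moonlight-qt
    sudo apt build-dep -y .        # needs deb-src entries enabled
    dpkg-buildpackage -b -us -uc   # binary-only, unsigned
    sudo apt install ../moonlight-qt_*.deb
fi
```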
-
If you are talking about lag on the Discord web page, VPU acceleration won't help, because Discord is not playing video. You can check Chromium's GPU status at chrome://gpu to see whether the GPU driver is working.
On 6/14/2025 at 12:31 PM, Werner said:
I think @amazingfate made a custom Chromium package at some point that included optimizations for rk35xx but no clue about its status. Might be even included in mesa-vpu extension?
My PPA only packages chromium for noble; Ubuntu 25.04 is not supported. Also, the chromium in my PPA only adds patches that let Chromium use the VPU when playing videos, so it would not improve Discord web page performance anyway.
-
Hi,
There are differences between the bookworm and noble images:
- bookworm runs the vendor 6.1.99 kernel from Rockchip's SDK, while noble runs mainline 6.12.13. The SDK kernel may have issues detecting the screen resolution; you may wait for the new 6.1.115 version to see whether the issue is fixed.
- bookworm is probably running without the GPU driver. You can check with the command "dpkg -l | grep libgl1-mesa-dri": if the version is lower than 24.1, the GPU driver is not in use. You can try a bookworm image built with the mesa-vpu extension, for example the cinnamon one: https://dl.armbian.com/bananapim7/Bookworm_current_cinnamon-backported-mesa
- You are running both images with the xfce desktop, so the scaling issue may be desktop-environment related; a Wayland session (such as GNOME Wayland) usually has better support for HiDPI screens.
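To make the GPU-driver check above concrete, here is a small sketch that compares the installed libgl1-mesa-dri version against the 24.1 threshold using `sort -V`; the package name comes from the note above, everything else is a generic version comparison.

```shell
# Read the installed version of libgl1-mesa-dri (empty if missing).
ver=$(dpkg-query -W -f '${Version}' libgl1-mesa-dri 2>/dev/null || true)

# version_ge A B: succeed if version A >= version B (GNU sort -V).
version_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

if [ -n "$ver" ] && version_ge "$ver" 24.1; then
    echo "mesa $ver: GPU driver should be in use"
else
    echo "mesa missing or older than 24.1: try the mesa-vpu image"
fi
```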
-
I have created a fix pr: https://github.com/radxa-pkg/aic8800/pull/27
You can download debs from workflow artifacts: https://github.com/radxa-pkg/aic8800/actions/runs/13725535037
-
-
Quote
How does this relate to Armbian build for vendor kernel? Does it need to be patched or will move by some workflow?
This commit should be in the source tree if you are using the latest code. I have no idea why the issue happens again. I will check it later.
-
This is a dkms-related issue when BTF is enabled. This commit should have fixed it:
https://github.com/armbian/linux-rockchip/commit/7294c3d056ca52ba312f1b7cdc5cd2273f7d9766
-
After a 20-minute fio test, the NVMe disk is still fine:
$ sudo fio --name=randwrite --ioengine=sync --rw=write --bs=4k --numjobs=1 --size=1G --runtime=1200s --time_based --directory=/mnt
[sudo] password for jfliu:
randwrite: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [W(1)][93.4%][w=93.4MiB/s][w=23.9k IOPS][eta 01m:19s]
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
randwrite: (groupid=0, jobs=1): err= 0: pid=5569: Wed Dec 11 10:26:57 2024
write: IOPS=49.7k, BW=194MiB/s (203MB/s)(227GiB/1200478msec); 0 zone resets
clat (nsec): min=1166, max=3770.1M, avg=18646.17, stdev=2441229.11
lat (nsec): min=1458, max=3770.1M, avg=18735.33, stdev=2441230.02
clat percentiles (nsec):
| 1.00th=[ 1464], 5.00th=[ 1752], 10.00th=[ 2040],
| 20.00th=[ 2040], 30.00th=[ 2320], 40.00th=[ 2640],
| 50.00th=[ 2640], 60.00th=[ 2928], 70.00th=[ 3216],
| 80.00th=[ 3216], 90.00th=[ 4080], 95.00th=[ 4640],
| 99.00th=[ 7904], 99.50th=[ 10560], 99.90th=[8716288],
| 99.95th=[9109504], 99.99th=[9240576]
bw ( KiB/s): min= 24, max=1543457, per=100.00%, avg=227861.36, stdev=262064.21, samples=2093
iops : min= 6, max=385864, avg=56965.30, stdev=65516.07, samples=2093
lat (usec) : 2=7.53%, 4=80.62%, 10=11.27%, 20=0.41%, 50=0.03%
lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.13%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2000=0.01%, >=2000=0.01%
cpu : usr=3.12%, sys=17.15%, ctx=251826, majf=0, minf=12
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,59630438,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=194MiB/s (203MB/s), 194MiB/s-194MiB/s (203MB/s-203MB/s), io=227GiB (244GB), run=1200478-1200478msec

Disk stats (read/write):
nvme0n1: ios=0/265014, sectors=0/414735560, merge=0/734, ticks=0/1115350653, in_queue=1115646197, util=98.45%
-
I have a v1.1 board.
Now I boot a noble CLI image with the 6.1.84 vendor kernel.
A 120s fio test is fine:
$ sudo fio --name=randwrite --ioengine=sync --rw=write --bs=4k --numjobs=1 --size=1G --runtime=120s --time_based --directory=/mnt
randwrite: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=1079MiB/s][w=276k IOPS][eta 00m:00s]
randwrite: (groupid=0, jobs=1): err= 0: pid=4531: Wed Dec 11 01:49:08 2024
write: IOPS=229k, BW=895MiB/s (939MB/s)(105GiB/120116msec); 0 zone resets
clat (nsec): min=1166, max=1465.6M, avg=3015.61, stdev=345679.23
lat (nsec): min=1458, max=1465.6M, avg=3105.60, stdev=345687.63
clat percentiles (nsec):
| 1.00th=[ 1752], 5.00th=[ 1752], 10.00th=[ 1752], 20.00th=[ 2040],
| 30.00th=[ 2040], 40.00th=[ 2040], 50.00th=[ 2040], 60.00th=[ 2320],
| 70.00th=[ 3216], 80.00th=[ 3216], 90.00th=[ 3504], 95.00th=[ 4080],
| 99.00th=[ 6112], 99.50th=[ 7584], 99.90th=[ 11072], 99.95th=[ 13376],
| 99.99th=[749568]
bw ( KiB/s): min=49304, max=1457152, per=100.00%, avg=929145.40, stdev=313692.78, samples=236
iops : min=12326, max=364288, avg=232286.35, stdev=78423.29, samples=236
lat (usec) : 2=15.55%, 4=79.07%, 10=5.21%, 20=0.15%, 50=0.01%
lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%, 2000=0.01%
cpu : usr=15.40%, sys=77.45%, ctx=52410, majf=0, minf=22
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,27525121,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=895MiB/s (939MB/s), 895MiB/s-895MiB/s (939MB/s-939MB/s), io=105GiB (113GB), run=120116-120116msec

Disk stats (read/write):
nvme0n1: ios=1/113583, sectors=8/216264040, merge=0/248, ticks=9/45093762, in_queue=45095689, util=90.05%

I'm testing with a 256G NVMe SSD.
-
I'm sorry, the image I downloaded from https://www.armbian.com/armsom-sige7/ is running mainline kernel 6.12.1. I did not check the 6.1 vendor kernel.
My board is at home and I don't remember the board version. I will test with the 6.1 kernel again when I get home today.
BTW, what kind of NVMe SSD are you using?
-
Fixed by upstream 131.0.6778.108-1~deb12u1.
-
The BPi M7 is produced by ArmSoM, and ArmSoM is invested in by BPi, so there is a BPi logo on the board.
-
I don't use my NVMe disk as the root partition, so I use fio to test its I/O performance:
$ sudo fio --name=randwrite --ioengine=sync --rw=write --bs=4k --numjobs=1 --size=1G --runtime=1200s --time_based --directory=/mnt
randwrite: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
randwrite: (groupid=0, jobs=1): err= 0: pid=4208: Tue Dec 10 00:35:12 2024
write: IOPS=47.6k, BW=186MiB/s (195MB/s)(219GiB/1205176msec); 0 zone resets
clat (nsec): min=875, max=3507.7M, avg=3209.46, stdev=612372.83
lat (nsec): min=1166, max=3507.7M, avg=3291.55, stdev=612373.49
clat percentiles (nsec):
| 1.00th=[ 1464], 5.00th=[ 1752], 10.00th=[ 2040], 20.00th=[ 2040],
| 30.00th=[ 2040], 40.00th=[ 2320], 50.00th=[ 2320], 60.00th=[ 2320],
| 70.00th=[ 2320], 80.00th=[ 2320], 90.00th=[ 2640], 95.00th=[ 3504],
| 99.00th=[ 5280], 99.50th=[ 5536], 99.90th=[ 9664], 99.95th=[53504],
| 99.99th=[91648]
bw ( KiB/s): min= 384, max=1467567, per=100.00%, avg=734206.83, stdev=482602.67, samples=626
iops : min= 96, max=366891, avg=183551.56, stdev=120650.66, samples=626
lat (nsec) : 1000=0.01%
lat (usec) : 2=9.80%, 4=86.66%, 10=3.45%, 20=0.03%, 50=0.01%
lat (usec) : 100=0.04%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2000=0.01%, >=2000=0.01%
cpu : usr=2.85%, sys=14.76%, ctx=193337, majf=0, minf=21
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,57409537,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
WRITE: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=219GiB (235GB), run=1205176-1205176msec

Disk stats (read/write):
nvme0n1: ios=0/234874, sectors=0/457824096, merge=0/198, ticks=0/281679979, in_queue=281774755, util=92.39%
-
Just checked the Ubuntu noble image from https://www.armbian.com/armsom-sige7/, which has the 6.1.84 kernel. The NVMe disk is detected.
I did not run a stress test on it; simply mounting a partition and writing some data to it works fine.
-
Armbian uses zram for swap by default, so its images have no swap partition. If you want a separate swap partition, you have to install the system yourself: rsync the whole system to a partitioned NVMe root partition, write the correct root partition UUID to /boot/armbianEnv.txt, and configure swap in /etc/fstab.
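A sketch of those steps, with the destructive copy shown only as comments and the two config edits wrapped in small helper functions. The device names and the `rootdev=` key in armbianEnv.txt are assumptions based on typical Armbian images; the demo at the end runs on temp files only.

```shell
# Destructive steps (run as root; /dev/nvme0n1p1 = root, p2 = swap
# is an assumed layout):
#   mkfs.ext4 /dev/nvme0n1p1 && mkswap /dev/nvme0n1p2
#   mount /dev/nvme0n1p1 /mnt
#   rsync -aHAXx --exclude={"/mnt/*","/proc/*","/sys/*","/dev/*","/run/*","/tmp/*"} / /mnt
#   blkid -s UUID -o value /dev/nvme0n1p1    # -> root partition UUID

# set_rootdev FILE UUID: point armbianEnv.txt at the new root
set_rootdev() {
    sed -i "s|^rootdev=.*|rootdev=UUID=$2|" "$1"
}

# add_swap FILE UUID: register the swap partition in fstab
add_swap() {
    printf 'UUID=%s none swap sw 0 0\n' "$2" >> "$1"
}

# Demo on temp files so nothing on the running system is touched.
env_file=$(mktemp)
echo 'rootdev=UUID=00000000-old' > "$env_file"
set_rootdev "$env_file" 11111111-new
cat "$env_file"    # rootdev=UUID=11111111-new
```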
-
I don't know why the Kodi maintainer recommends OpenGL instead of OpenGL ES; this is an Arm platform, so OpenGL won't work.
-
Hi, there is a simple way to install Armbian to an NVMe SSD:
1. Boot Armbian from the SD card.
2. Burn u-boot to eMMC. You can find the u-boot firmware at:
$ ls /usr/lib/linux-u-boot-*/
idbloader.img rkspi_loader.img u-boot.itb
Use dd to flash u-boot:
dd if=./idbloader.img of=/dev/mmcblk0 seek=64 conv=notrunc status=none
dd if=./u-boot.itb of=/dev/mmcblk0 seek=16384 conv=notrunc status=none
3. Burn the Armbian image to the NVMe SSD. You can use dd from the Armbian system on the SD card:
sudo dd if=./Armbian.img of=/dev/nvme0n1 bs=1M status=progress
or use an application like balenaEtcher on Windows to flash the image to the SSD.
4. Unplug the SD card and plug in the NVMe SSD; now you should be able to boot the Armbian system from the NVMe SSD.
-
-
What does the serial log show when the board fails to boot?
I never use armbian-config to switch kernels. I just use apt to install the linux-dtb and linux-image packages and make sure apt runs without errors.
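For illustration, a gated sketch of that apt-based kernel switch; the rk35xx vendor-kernel package names below are assumptions, so list what actually exists for your board first.

```shell
# See which kernel image packages are available (names vary per
# board family; rk35xx/vendor is an assumption here).
apt-cache search '^linux-image-' 2>/dev/null | grep -i rk35xx || true

# Gated: only install and reboot when explicitly requested.
if [ "${DO_SWITCH:-no}" = yes ]; then
    sudo apt install linux-image-vendor-rk35xx linux-dtb-vendor-rk35xx
    sudo reboot
fi
```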
-
Moonlight 4.3.1 is linked against ffmpeg 4.4, while the new moonlight is linked against ffmpeg 6, because libavcodec-dev in my PPA has been updated to 6.0.
The decoding-time calculation is ffmpeg related. I think moonlight can't distinguish decoding time from rendering time with the new ffmpeg.
-
Decoding time is ok if there is no frame drop.
-
eMMC is also okay.
-
I just downloaded `Armbian_24.5.1_Rock-5c_noble_vendor_6.1.43.img.xz` and flashed it to a TF card; my Rock 5C Lite boots with HDMI output, no issues.
We are ready to offer a Bountysource donation to Armbian
in Orange Pi 5
These errors are fine: moonlight tries all the hardware acceleration APIs (VAAPI, VDPAU, Vulkan), but on the Rockchip platform we only need the rkmpp decoder, and as you can see in the log, it is detected.