
David Pottage



  1. I have a RockPro64 that I have been using for many years as a server. It is currently running Armbian Bookworm 12.5. Most of the storage is on an M.2 SSD attached to the PCIe socket and is subdivided into volumes using Linux LVM2. Today I accepted the Armbian kernel upgrade from 6.1.50 to 6.6.16. During the reboot it failed to mount any of my LVM volumes because the device nodes were not present. I found this article about how to recover the situation: https://unix.stackexchange.com/questions/11125/lvm-devices-under-dev-mapper-missing While in rescue mode I ran the suggested command:

     vgchange -a y <name of volume group>

     This re-created the LVM devices under /dev/mapper so I was able to mount the volumes, but the newly created devices were not persisted, so I hit the same problem again when I attempted to reboot. I have been able to get my system booting by adding "nofail" to each /etc/fstab line and then running the vgchange command after boot to create the devices, but clearly this is not a wise long-term solution. It looks like something has been missed out of the early boot commands or the initrd, but I don't know enough to fully debug the problem. Any ideas?
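     A hedged sketch of what I suspect the permanent fix looks like, assuming the root cause is an initramfs built without the lvm2 activation hooks (so the VG is never activated before /etc/fstab is processed). "myvg" is a placeholder volume group name, and the helper function is my own, just to list which fstab entries still carry the temporary "nofail" workaround so they can be cleaned up afterwards:

     ```shell
#!/bin/sh
# List fstab mount points still carrying the temporary "nofail" option,
# so the workaround can be removed once the initramfs is fixed.
list_nofail_entries() {
    grep -v '^#' "$1" | awk '$4 ~ /nofail/ {print $2}'
}

# The actual fix, run as root on the board (destructive, so shown as
# comments -- "myvg" stands in for the real volume group name):
#   vgchange -a y myvg            # activate the VG now
#   apt install lvm2              # provides the initramfs LVM activation hooks
#   update-initramfs -u -k all    # rebake the hooks into every initrd
     ```

     Treat this as a sketch rather than a confirmed diagnosis; if the lvm2 package was already installed, the bug is more likely in Armbian's initramfs/uInitrd regeneration step.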
  2. I investigated, and found that many nightly builds are identical other than the version number. Using a recent range of version numbers still cached in my board's file system (468 469 473 474 475 477 480 484 490 491 503 516 517 520 525 527), I extracted all the package control files:

     for ver in 468 469 473 474 475 477 480 484 490 491 503 516 517 520 525 527 ; do
         mkdir -p /tmp/extract/24.2.0-trunk.${ver}_arm64
         dpkg-deb -e /var/cache/apt/archives/linux-image-edge-rockchip-rk3588_24.2.0-trunk.${ver}_arm64.deb /tmp/extract/24.2.0-trunk.${ver}_arm64
     done

     See also: https://forums.debian.net/viewtopic.php?t=60431

     Those version numbers cover a date range from 27th Jan to 5th Feb. I then took the checksum of each md5sums file to see when it changed:

     root@orangepi5:/tmp# sha1sum /tmp/extract/*/md5sums
     b095c3719b4682c6f7f2d9809effeb77bb5f9a20  /tmp/extract/24.2.0-trunk.468_arm64/md5sums
     b095c3719b4682c6f7f2d9809effeb77bb5f9a20  /tmp/extract/24.2.0-trunk.469_arm64/md5sums
     b095c3719b4682c6f7f2d9809effeb77bb5f9a20  /tmp/extract/24.2.0-trunk.473_arm64/md5sums
     b095c3719b4682c6f7f2d9809effeb77bb5f9a20  /tmp/extract/24.2.0-trunk.474_arm64/md5sums
     b095c3719b4682c6f7f2d9809effeb77bb5f9a20  /tmp/extract/24.2.0-trunk.475_arm64/md5sums
     b095c3719b4682c6f7f2d9809effeb77bb5f9a20  /tmp/extract/24.2.0-trunk.477_arm64/md5sums
     b095c3719b4682c6f7f2d9809effeb77bb5f9a20  /tmp/extract/24.2.0-trunk.480_arm64/md5sums
     b095c3719b4682c6f7f2d9809effeb77bb5f9a20  /tmp/extract/24.2.0-trunk.484_arm64/md5sums
     b095c3719b4682c6f7f2d9809effeb77bb5f9a20  /tmp/extract/24.2.0-trunk.490_arm64/md5sums
     b095c3719b4682c6f7f2d9809effeb77bb5f9a20  /tmp/extract/24.2.0-trunk.491_arm64/md5sums
     64ac8a009665064bbc877777f9f1a0e1d847fcf8  /tmp/extract/24.2.0-trunk.503_arm64/md5sums
     64ac8a009665064bbc877777f9f1a0e1d847fcf8  /tmp/extract/24.2.0-trunk.516_arm64/md5sums
     64ac8a009665064bbc877777f9f1a0e1d847fcf8  /tmp/extract/24.2.0-trunk.517_arm64/md5sums
     64ac8a009665064bbc877777f9f1a0e1d847fcf8  /tmp/extract/24.2.0-trunk.520_arm64/md5sums
     64ac8a009665064bbc877777f9f1a0e1d847fcf8  /tmp/extract/24.2.0-trunk.525_arm64/md5sums
     64ac8a009665064bbc877777f9f1a0e1d847fcf8  /tmp/extract/24.2.0-trunk.527_arm64/md5sums

     So from this we can see that the contents of the package only changed once over that time period. Other than that, the only difference between package versions is the version number itself in the DEBIAN/control file. Given this, would it make sense to add a step to the build and publish scripts to skip publishing a new version of a package if its contents are identical to the previous version? I am working on Debian packaging scripts for work anyway, so it would not be hard.
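     Here is a sketch of the skip-identical-packages check I have in mind, built on the same md5sums comparison as above. The function name and the idea of keeping the previously published extraction around are my assumptions, not an existing Armbian build-script API:

     ```shell
#!/bin/sh
# Decide whether a freshly built package is worth publishing: compare the
# md5sums manifest (content checksums of every shipped file) of the new
# build against the last published one. DEBIAN/control differs only in
# its version number, so it is deliberately left out of the comparison.
same_contents() {
    new="$1"; old="$2"    # directories produced by: dpkg-deb -e pkg.deb dir
    [ "$(sha1sum "$new/md5sums" | cut -d' ' -f1)" = \
      "$(sha1sum "$old/md5sums" | cut -d' ' -f1)" ]
}

# usage sketch:
#   if same_contents /tmp/extract/new /tmp/extract/prev; then
#       echo "contents unchanged -- skip publishing"
#   fi
     ```

     A caveat: md5sums only covers shipped file contents, so two packages that differ in maintainer scripts would still compare equal; a production check should probably hash the whole control archive minus the version field.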
  3. I have noticed that the 6.8 kernel on my OrangePi5 updates more than once per day (via the Debian apt update mechanism).

     1. The version number of the kernel still reports as RC1, even though kernel.org released RC2 more than a week ago, and RC3 a few days back. Does the Armbian build system need a PR to a config file to update the upstream kernel that it pulls and builds?

     root@orangepi5:~# uname -a
     Linux orangepi5 6.8.0-rc1-edge-rockchip-rk3588 #2 SMP PREEMPT Sun Jan 21 22:11:32 UTC 2024 aarch64 GNU/Linux

     2. Are all the kernel releases actually different? There appears to be a huge amount of churn.
  4. I just found a kernel mainline status page (for another RK3588 board) and it shows the random number generator as TODO: https://gitlab.collabora.com/hardware-enablement/rockchip-3588/notes-for-rockchip-3588/-/blob/main/mainline-status.md#:~:text=Random Number Generator So I guess for now the OrangePi5 will be starved of entropy during early boot-up, and won't be able to generate ssh host keys.
  5. I found an NVMe M.2 drive to fit the socket on the underside of the board: WD PC SN740 SDDPTQD-512G

     root@orangepi5:~# smartctl -a /dev/nvme0n1
     smartctl 7.4 2023-08-01 r5530 [aarch64-linux-6.8.0-rc1-edge-rockchip-rk3588] (local build)
     Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

     === START OF INFORMATION SECTION ===
     Model Number:                       WD PC SN740 SDDPTQD-512G
     Serial Number:                      23176Q443121
     Firmware Version:                   73110000
     PCI Vendor/Subsystem ID:            0x15b7
     IEEE OUI Identifier:                0x001b44
     Total NVM Capacity:                 512,110,190,592 [512 GB]
     Unallocated NVM Capacity:           0
     Controller ID:                      0
     NVMe Version:                       1.4
     Number of Namespaces:               1
     Namespace 1 Size/Capacity:          512,110,190,592 [512 GB]
     Namespace 1 Formatted LBA Size:     512
     Namespace 1 IEEE EUI-64:            001b44 4a48c68698
     Local Time is:                      Tue Jan 30 07:25:18 2024 GMT
     Firmware Updates (0x14):            2 Slots, no Reset required
     Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
     Optional NVM Commands (0x00df):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp Verify
     Log Page Attributes (0x7e):         Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg Pers_Ev_Lg Log0_FISE_MI Telmtry_Ar_4
     Maximum Data Transfer Size:         256 Pages
     Warning  Comp. Temp. Threshold:     84 Celsius
     Critical Comp. Temp. Threshold:     88 Celsius
     Namespace 1 Features (0x02):        NA_Fields

     Supported Power States
     St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
      0 +     4.70W    4.30W       -    0  0  0  0        0       0
      1 +     3.00W    3.00W       -    0  0  0  0        0       0
      2 +     2.20W    2.00W       -    0  0  0  0        0       0
      3 -   0.0150W       -        -    3  3  3  3     1500    2500
      4 -   0.0050W       -        -    4  4  4  4    10000    6000
      5 -   0.0033W       -        -    5  5  5  5   176000   25000

     Supported LBA Sizes (NSID 0x1)
     Id Fmt  Data  Metadt  Rel_Perf
      0 +     512       0         2
      1 -    4096       0         1

     root@orangepi5:~# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/mnt/nvme0n1p1/test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
     test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
     fio-3.36
     Starting 1 process
     test: Laying out IO file (1 file / 4096MiB)
     Jobs: 1 (f=1): [m(1)][100.0%][r=329MiB/s,w=109MiB/s][r=84.3k,w=27.9k IOPS][eta 00m:00s]
     test: (groupid=0, jobs=1): err= 0: pid=3556: Tue Jan 30 07:20:48 2024
       read: IOPS=80.5k, BW=314MiB/s (330MB/s)(3070MiB/9764msec)
        bw (  KiB/s): min=266528, max=339688, per=100.00%, avg=322661.47, stdev=21780.95, samples=19
        iops        : min=66632, max=84920, avg=80665.16, stdev=5445.10, samples=19
       write: IOPS=26.9k, BW=105MiB/s (110MB/s)(1026MiB/9764msec); 0 zone resets
        bw (  KiB/s): min=88128, max=115320, per=100.00%, avg=107853.47, stdev=7553.47, samples=19
        iops        : min=22032, max=28830, avg=26963.37, stdev=1888.37, samples=19
       cpu          : usr=9.34%, sys=39.17%, ctx=134205, majf=0, minf=14
       IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
          submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
          complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
          issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
          latency   : target=0, window=0, percentile=100.00%, depth=64

     Run status group 0 (all jobs):
        READ: bw=314MiB/s (330MB/s), 314MiB/s-314MiB/s (330MB/s-330MB/s), io=3070MiB (3219MB), run=9764-9764msec
       WRITE: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=1026MiB (1076MB), run=9764-9764msec

     Disk stats (read/write):
       nvme0n1: ios=767663/256616, sectors=6141304/2052936, merge=0/1, ticks=451440/50138, in_queue=501578, util=99.12%
  6. Yes, I spotted that as well. I found my USB TTL adaptor and successfully logged in over serial, but it took some effort to get ssh working. That was one of the issues; the other was that I had messed up the systemd unit files with my attempts to edit them on the MicroSD card. If I get the chance I will see if I can find where the bug is. I suspect that there might be a lack of entropy during early boot, so the host key creation times out.
  7. I have booted my Orange Pi 5 with a recent Trixie nightly image. It is responding to pings, but I can't ssh into it. Is there a way to enable ssh access by mounting the SD card from another Linux PC and editing config files? I have a USB TTL adaptor but I have mislaid it, so if there is a shortcut to turning on ssh, then I would prefer to do that. I promise I will change the root password 😅
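     In case it helps anyone else searching: a sketch of how ssh can probably be enabled offline, by hand-creating the symlink that "systemctl enable ssh" would make, against the mounted SD card root. The mount point and device name are assumptions, and the unit path follows Debian's packaging of openssh-server:

     ```shell
#!/bin/sh
# Create the "enabled" symlink systemd looks for, but on an offline
# (mounted) root filesystem rather than the running system.
enable_unit_offline() {
    root="$1"; unit="$2"
    mkdir -p "$root/etc/systemd/system/multi-user.target.wants"
    ln -sf "/lib/systemd/system/$unit" \
        "$root/etc/systemd/system/multi-user.target.wants/$unit"
}

# usage sketch (device name varies with your card reader):
#   mount /dev/sdX1 /mnt/sdroot
#   enable_unit_offline /mnt/sdroot ssh.service
#   umount /mnt/sdroot
     ```

     This only helps if the image ships openssh-server but leaves it disabled; if sshd is failing for another reason (e.g. missing host keys), the serial console is still the reliable way in.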
  8. I was able to restore my system to a working state by downgrading to the old kernel versions. I think that there is probably a bug or missing command in the postinst scripts for newer Armbian kernels. For anyone landing here via an internet search in future, the steps were:

     1. Find the upgrade command that installed the bad kernel:

        zless /var/log/apt/history.log.1.gz

        e.g.: Upgrade: armbian-config:arm64 (23.8.3, 23.11.1), linux-u-boot-rockpro64-current:arm64 (23.8.1, 23.11.1), linux-dtb-current-rockchip64:arm64 (23.8.1, 23.11.1), linux-image-current-rockchip64:arm64 (23.8.1, 23.11.1), armbian-firmware:arm64 (23.8.3, 23.11.1)

        Note that it shows both the old and new versions.

     2. Remove the beta apt source:

        nano /etc/apt/sources.list.d/armbian.list

     3. Use apt to downgrade the offending packages using <name>=<version> syntax, e.g.:

        apt install armbian-config=23.8.3 linux-u-boot-rockpro64-current=23.8.1 linux-dtb-current-rockchip64=23.8.1 linux-image-current-rockchip64=23.8.1 armbian-firmware=23.8.3

     4. Check that the symlinks in /boot are consistent:

        ls -l /boot/

        In my case dtb, Image and uInitrd all pointed to 6.1.50 versions.

     5. Cross fingers, reboot.

     The other thing that made it possible to resolve this was a serial console adaptor. Without one it would have been very much harder to find the problem in the first place.
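     The apt history "Upgrade:" line can even be turned back into the downgrade arguments mechanically. A small sketch (the parsing is my own; the "pkg (old, new)" line format is what apt writes to history.log):

     ```shell
#!/bin/sh
# Convert an apt history "Upgrade:" line (on stdin) into "pkg=oldversion"
# arguments suitable for "apt install", one per line.
downgrade_args() {
    sed -e 's/^Upgrade: //' -e 's/), /)\n/g' |
        sed -e 's/^\(.*\) (\([^,]*\),.*$/\1=\2/'
}

# usage sketch:
#   apt install $(zgrep '^Upgrade:' /var/log/apt/history.log.1.gz | downgrade_args)
     ```

     Worth double-checking the generated list by eye before running it, since a history line can also mix in packages you do not want to roll back.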
  9. Thanks @Werner - I have found a copy of the armbianmonitor script and used it to upload diagnostics.
  10. NB: armbianmonitor did not work. No such package, and nothing that I could find to install.
  11. I have recently accepted the 23.11.1 Armbian kernel release on my RockPro64. The update has not rebuilt or updated the uInitrd for the 6.1.63 kernel. Note that the /boot/uInitrd symlink still points to the old uInitrd version, but the other two symlinks have updated.

      root@jupiter:~# ls -l /boot/dtb /boot/Image /boot/uInitrd
      lrwxrwxrwx 1 root root 29 Dec  2 20:38 /boot/dtb -> dtb-6.1.63-current-rockchip64
      lrwxrwxrwx 1 root root 33 Dec  2 20:38 /boot/Image -> vmlinuz-6.1.63-current-rockchip64
      lrwxrwxrwx 1 root root 33 Nov 30 18:04 /boot/uInitrd -> uInitrd-6.1.50-current-rockchip64
      root@jupiter:~# ls -l /boot/uInitrd*
      lrwxrwxrwx 1 root root 33 Nov 30 18:04 /boot/uInitrd -> uInitrd-6.1.50-current-rockchip64
      -rw-r--r-- 1 root root 17777215 Jul 22 21:07 /boot/uInitrd-5.15.93-rockchip64
      -rw-r--r-- 1 root root 17560966 Nov 30 12:06 /boot/uInitrd-6.1.50-current-rockchip64

      The first time after the update I did not notice before rebooting, and the box failed to boot properly. I got to a rescue console and was able to restore the three 6.1.50 kernel files from a backup, and restore the symlinks. I tried running:

      apt reinstall armbian-config armbian-firmware linux-dtb-current-rockchip64 linux-image-current-rockchip64 linux-u-boot-rockpro64-current base-files

      But this did not fix the problem. Google is not helping me find the command to create the 6.1.63 uInitrd file by hand. I have also tried installing the 6.1.64 kernel from beta.armbian.com, but that had the same issue. Currently my device is running 6.1.50-current-rockchip64 but the 6.1.64-current-rockchip64 kernel is installed, and I suspect that there are inconsistencies in how things are set up. For example WireGuard is not working correctly. How can I restore my device back to a stable working state?
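      For the record, this is how I believe the missing uInitrd can be recreated by hand: build the initramfs for the specific kernel version, then wrap it in the U-Boot ramdisk format with mkimage (which mirrors what Armbian's initramfs hook normally does). Treat it as a sketch rather than the official procedure:

      ```shell
#!/bin/sh
# Rebuild the initramfs for a specific kernel version and wrap it in the
# U-Boot ramdisk image format that /boot/uInitrd must be in.
KVER=6.1.63-current-rockchip64

# Destructive steps, shown as comments -- run as root on the board:
#   update-initramfs -c -k "$KVER"     # writes /boot/initrd.img-$KVER
#   mkimage -A arm64 -O linux -T ramdisk -C gzip -n "uInitrd $KVER" \
#       -d "/boot/initrd.img-$KVER" "/boot/uInitrd-$KVER"

# Finally point the symlink at the new image (relative target, as Armbian
# uses relative symlinks in /boot):
point_uinitrd_at() {
    ln -sfn "uInitrd-$2" "$1/uInitrd"
}
# point_uinitrd_at /boot "$KVER"
      ```

      mkimage comes from the u-boot-tools package if it is not already installed.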
  12. How do I install firmware packages via armbian-config? I can't see that option in the curses menus. Where is it? Thanks.
  13. We have all seen the recent release announcement of Armbian 21.08, which now includes a beta release of Debian 11 (Bullseye). I have a RockPro64 (Rockchip 3399) that is currently running Debian 10 (Buster) that I would like to upgrade, so that it gets the latest packages from Debian and continues to receive security patches. I am tempted to just upgrade it by editing /etc/apt/sources.list and pulling the latest packages (the Debian way), but I wondered if there is a safer way to perform the upgrade. Has anyone else upgraded a Debian based Armbian system that way? Are there any issues to watch out for? What customisations are there in Armbian besides the kernel? How easy would it be to apply those changes after the upgrade? Thanks.
  14. Thanks for that tip. It worked to boot my old 5.4.49 kernel. I then updated the symlinks in /boot to boot the new 5.8.6 kernel, and the boot failed as before. I had saved a copy of the original /boot directory (from the released 20.08 release), so I did some more investigation and diffed the failing /boot directory against the copy I took. I noticed a difference in /boot/boot.cmd:

      # diff boot/boot.cmd OLD_boot/boot.cmd
      6c6
      < setenv load_addr "0x9000000"
      ---
      > setenv load_addr "0x39000000"
      10c10
      < setenv verbosity "1"
      ---
      > setenv verbosity "7"
      12d11
      < setenv bootlogo "false"
      15c14
      < setenv earlycon "off"
      ---
      > setenv earlycon "on"
      29c28,29
      < if test "${bootlogo}" = "true"; then setenv consoleargs "bootsplash.bootfile=bootsplash.armbian ${consoleargs}"; fi
      ---
      > # if test "${earlycon}" = "on"; then setenv consoleargs "earlycon=uart,mmio,0xFF1A0000,1500000 ${consoleargs}"; fi
      > # 2: uart:16550A mmio:0xFF1A0000 irq:38 tx:51 rx:0 RTS|DTR

      Could the difference in load_addr be the cause of the problems? I have attached the boot log from the successful boots of each kernel.

      Successful kernel 5.8 Jupiter system boot from microSD.txt
      Successful kernel 5.4.49 Jupiter system boot from microSD.txt
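      One gotcha worth noting when experimenting with boot.cmd: U-Boot actually reads the compiled boot.scr, so any edit to boot.cmd has to be recompiled before it takes effect. A small sketch (the helper for pulling out a setenv value for comparison is my own):

      ```shell
#!/bin/sh
# Extract the value of a quoted "setenv" line from a boot.cmd, so two
# versions of the file can be compared one variable at a time.
get_bootenv() {
    sed -n "s/^setenv $2 \"\(.*\)\"/\1/p" "$1"
}

# After editing /boot/boot.cmd, recompile it into the script image U-Boot
# actually loads (mkimage is in the u-boot-tools package):
#   mkimage -C none -A arm -T script -d /boot/boot.cmd /boot/boot.scr
      ```

      e.g. get_bootenv boot/boot.cmd load_addr prints the load address, which makes diffing specific settings between boot.cmd versions less error-prone than eyeballing a raw diff.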
  15. I tried booting from my SD card and it did not work. I started with booting the old 5.4.49 kernel, but even that failed. The first time, I reformatted my SD card with an empty ext4 filesystem, and used rsync to copy everything from the eMMC root file system. When I attempted to boot the 5.4.49 kernel from it, it failed and could not find the root filesystem. I then thought that there might be some magic in the way the files were laid out, so I used GParted to make a binary copy of the eMMC root file system partition to the SD card. When that booted, it went into a boot loop with "Synchronous Abort" handler, esr 0x02000000 messages. I have since looked at the logs from GParted, and noticed that it used e2image instead of dd to copy the partition. Could that command have left out something important? Are there any boot blocks or clever bits of file system layout that are used to boot the RockPro64? It looks like a simple copy does not create something bootable. I have attached the serial port output from both boot attempts.

      Boot reset loop.txt
      Verbose 9 Boot from uSD failed no root FS 5.4.49 kernel.txt
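      The likely answer to my own question, for future searchers: on Rockchip boards the boot ROM reads the bootloader from fixed raw sectors of the device, outside any filesystem, so a file-level copy can never be bootable on its own. A hedged sketch (the image file names are assumptions based on how Armbian's u-boot packages lay things out; the sector offsets 64 and 16384 are the usual Rockchip convention):

      ```shell
#!/bin/sh
# Rockchip boot ROM convention: SPL/idbloader at sector 64, U-Boot proper
# at sector 16384, each sector being 512 bytes.
sector_to_byte() {
    echo $(( $1 * 512 ))
}

# Destructive steps, shown as comments -- DEV is the SD card, NOT the eMMC,
# and the exact file names under /usr/lib depend on the u-boot package:
#   DEV=/dev/mmcblk1
#   UBOOT_DIR=/usr/lib/linux-u-boot-current-rockpro64
#   dd if="$UBOOT_DIR/idbloader.bin" of="$DEV" seek=64 conv=notrunc
#   dd if="$UBOOT_DIR/u-boot.itb" of="$DEV" seek=16384 conv=notrunc
      ```

      This also explains why the first partition on Armbian images starts several megabytes in: the gap leaves room for the raw bootloader blocks.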