Gareth Halfacree
Posts posted by Gareth Halfacree
-
3 hours ago, Heisath said:
Something that's probably gonna keep happening if nobody is using the beta repository (nightly/edge) and reporting errors before they make it to the current version.
Oh, I've tried reporting problems. Igor told me (and many, many others) that if I wasn't paying €50 a month to Armbian he wasn't interested, so I stopped.
-
49 minutes ago, Schroedingers Cat said:
Any idea what's going on? Is the built-in storage dying?
No, the storage is fine - Armbian 21.08.1 is yet another update that hoses something on the Helios64. Last time it was the fans, the time before that the 2.5Gb Ethernet port (twice), and this time it's the eMMC.
You'll need to follow the instructions upthread to downgrade the kernel to get the eMMC working again. Then cross your fingers it gets fixed at some point in the future.
-
If anyone has installed the update but *not* rebooted, it's a quick (temporary) fix:
sudo apt install linux-dtb-current-rockchip64=21.05.4 linux-headers-current-rockchip64=21.05.4 linux-image-current-rockchip64=21.05.4
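If you've applied that downgrade, it may also be worth holding the packages so a routine apt upgrade doesn't pull the broken versions straight back in - a sketch using standard apt-mark, untested on my end:

```shell
# Hold the three kernel packages at the downgraded 21.05.4 versions so
# `apt upgrade` won't reinstall the broken 21.08.1 builds; reverse with
# `sudo apt-mark unhold <package>` once a fixed release lands.
sudo apt-mark hold linux-dtb-current-rockchip64 linux-headers-current-rockchip64 linux-image-current-rockchip64
```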
-
Another month, another big batch of DRDY errors as the btrfs scrub runs.
-
4 hours ago, gprovost said:
The issue that might be related to SATA controller seems to only show up with BTRFS and ZFS while doing some scrubbing. Do you guys have a suggestion for a simple reproducible test starting from scratch (no data on HDD) ? Thanks for your help.
I'd suggest getting one or more drives, setting them up in a btrfs mirror, dd'ing a bunch of random data onto them, and scrubbing. It's possible dd'ing alone will trigger the error, but if it doesn't then scrubbing certainly should.
$ sudo mkfs.btrfs -L testdisks -m raid1 -d raid1 /dev/sdX /dev/sdY /dev/sdZ
$ sudo mkdir /media/testdisks
$ sudo chown USER /media/testdisks
$ sudo mount /dev/sdX /media/testdisks -o user,defaults,noatime,nodiratime,compress=zstd,space_cache
$ cd /media/testdisks
$ dd if=/dev/urandom of=/media/testdisks/001.testfile bs=10M count=1024
$ for i in {002..100}; do cp /media/testdisks/001.testfile /media/testdisks/$i.testfile; done
$ sudo btrfs scrub start -Bd /media/testdisks
The options on the mount are set to mimic my own. Obviously change the number and name of the devices and the username to match your own. It's also set to create 100 files of 10GB each for a total of 1TB of data; you could try with 10 files for 100GB total first to speed things up, but I've around a terabyte of data on my system so figured it was worth matching the real-world scenario as closely as possible.
-
It's the first of the month, so another automated btrfs scrub took place - and again triggered a bunch of DRDY drive resets on ata2.00. Are we any closer to figuring out what the problem is, here?
-
For anyone visiting from Twitter and wondering what the fuss is about, you can find the split thread here.
-
The key upgrade for me would be moving away from Armbian to a professional distribution like mainstream Ubuntu or Debian.
-
@Igor I've seen you post the same copy-and-paste message to others, and I remain as unimpressed now as the first time I saw it. I'm not asking you, nor Armbian, for support, here: I'm asking Kobol, to whom I have paid money for a commercial product with support, to assist me. If you want to help me, feel free; if not, kindly leave Kobol and its appointed representatives to the task.
I'll be honest: if I'd known about your user-hostile attitude before pre-ordering the Helios64, I wouldn't have ordered it at all - I'd have bought something that ran plain-Jane Ubuntu instead. It's certainly put me off using Armbian in the future, and I'm only still here because I don't want the money I've spent on the Helios64 to be wasted.
@gprovost Thanks for offering your assistance. Here's the output:
~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop1 7:1 0 272K 1 loop /snap/bpytop/199
loop2 7:2 0 57.2M 1 loop /snap/core20/907
loop3 7:3 0 57.2M 1 loop /snap/core20/877
loop4 7:4 0 27M 1 loop /snap/snapd/11043
loop5 7:5 0 280K 1 loop /snap/bpytop/203
loop6 7:6 0 27M 1 loop /snap/snapd/10709
sda 8:0 0 223.6G 0 disk
└─sda1 8:1 0 223.6G 0 part /
sdc 8:32 0 5.5T 0 disk /media/mirror1
sdd 8:48 0 5.5T 0 disk
mmcblk1 179:0 0 14.6G 0 disk
└─mmcblk1p1 179:1 0 14.4G 0 part /media/mmcboot
mmcblk1boot0 179:32 0 4M 1 disk
mmcblk1boot1 179:64 0 4M 1 disk
zram0 251:0 0 1.9G 0 disk [SWAP]
zram1 251:1 0 500M 0 disk /var/log
~$ cat /proc/mounts
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,nosuid,noexec,relatime,size=1869836k,nr_inodes=467459,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,noexec,relatime,size=389116k,mode=755 0 0
/dev/sda1 / ext4 rw,noatime,nodiratime,errors=remount-ro,commit=600 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup2 /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
none /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime,mode=700 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/rdma cgroup rw,nosuid,nodev,noexec,relatime,rdma 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
mqueue /dev/mqueue mqueue rw,nosuid,nodev,noexec,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,nosuid,nodev,noexec,relatime 0 0
tracefs /sys/kernel/tracing tracefs rw,nosuid,nodev,noexec,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,nosuid,nodev,noexec,relatime 0 0
configfs /sys/kernel/config configfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /tmp tmpfs rw,nosuid,relatime 0 0
/dev/loop2 /snap/core20/907 squashfs ro,nodev,relatime 0 0
/dev/loop1 /snap/bpytop/199 squashfs ro,nodev,relatime 0 0
/dev/loop3 /snap/core20/877 squashfs ro,nodev,relatime 0 0
/dev/mmcblk1p1 /media/mmcboot ext4 rw,noatime,nodiratime,errors=remount-ro,commit=600 0 0
/dev/mmcblk1p1 /boot ext4 rw,noatime,nodiratime,errors=remount-ro,commit=600 0 0
/dev/sdc /media/mirror1 btrfs rw,nosuid,nodev,noexec,noatime,nodiratime,compress-force=zstd:3,space_cache,subvolid=5,subvol=/ 0 0
/dev/sda1 /var/log.hdd ext4 rw,noatime,nodiratime,errors=remount-ro,commit=600 0 0
/dev/zram1 /var/log ext4 rw,relatime,discard 0 0
tmpfs /run/user/998 tmpfs rw,nosuid,nodev,relatime,size=389112k,mode=700,uid=998,gid=998 0 0
tracefs /sys/kernel/debug/tracing tracefs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /run/user/1000 tmpfs rw,nosuid,nodev,relatime,size=389112k,mode=700,uid=1000,gid=1000 0 0
/dev/loop6 /snap/snapd/10709 squashfs ro,nodev,relatime 0 0
/dev/loop4 /snap/snapd/11043 squashfs ro,nodev,relatime 0 0
/dev/loop5 /snap/bpytop/203 squashfs ro,nodev,relatime 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0

Everything should be pretty standard, except I increased the zram log partition size because I have a server running which generates quite a lot of log data over time.
-
6 hours ago, gprovost said:
Humm that's funny. You can call directly the sub app that is in charge of installing / updating bootloader.
$> nand-sata-install
You should use option 5.
Unfortunately, that doesn't work either - and may provide a hint as to why there's no option to install from armbian-config. If I run nand-sata-install, I get a message reading "This tool must be run from SD-card!" - which, of course, it isn't; it's being run from the SATA SSD.
Is there a way to update the bootloader without taking the system down, booting from an SD card, updating the bootloader, removing the SD card, and rebooting back into the installed operating system again? Because that's a bit of a slog, especially if the bootloader's going to need regular updates...
-
My Helios64 has been booting from an M.2 drive since I got it, and while I've been doing regular apt-upgrades I've been ignoring the bootloader. This post suggests that could be a problem, so I figured I'd upgrade the bootloader through armbian-config. Trouble is, I can't.
I can load armbian-config without error, but when I go into System there's no Install option any more. All I have are Freeze, Nightly, Lowlevel, Bootenv, CPU, Avahi, Hardware, Other, SSH, Firmware, ZSH, Default.
Is there another way to ensure the bootloader is up-to-date?
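In the meantime, one way to at least see which bootloader package version apt thinks is installed - though not necessarily what's actually been written to the boot media - is a plain dpkg query; the package name here is my assumption based on Armbian's linux-u-boot-<board>-<branch> naming scheme, so adjust if yours differs:

```shell
# List installed u-boot packages and their versions; on a Helios64 the
# relevant package is typically named linux-u-boot-helios64-current.
dpkg -l | grep u-boot
```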
-
1 hour ago, Koen Vervloesem said:
Does anyone have the same experience of this low encrypted ZFS performance on the Helios64? Is this because of the encryption? Looking at the output of top during the benchmark, the CPU seems to be taxed much, while on the HPE machine CPU usage is much less.
Your MicroServer has either an Opteron or Ryzen processor in it, either one of which is considerably more powerful than the Arm-based RK3399.
As a quick test, I ran OpenSSL benchmarks for AES-256-CBC on my Ryzen 2700X desktop, an older N54L MicroServer, and the Helios64, block size 8192 bytes.
Helios64: 68411.39kB/s.
N54L: 127620.44kB/s.
Desktop: 211711.31kB/s.
From that, you can see the Helios64 CPU is your bottleneck: 68,411.39kB/s is about 67MB/s, or within shouting distance of your 62MB/s real-world throughput - and that's just encryption, without the LZ4 compression overhead.
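For anyone who wants to run the same comparison on their own hardware, the figures came from OpenSSL's built-in benchmark - something along these lines, though the exact flags on my side may have differed (adding -evp switches to the EVP interface, which picks up hardware AES where the CPU offers it):

```shell
# Benchmark AES-256-CBC; `openssl speed` prints throughput in kB/s at
# several block sizes, with the largest (8192 bytes) closest to bulk I/O.
openssl speed aes-256-cbc
```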
-
1 hour ago, Eric Poscher-Mika said:
What kind of private data could a drive UUID disclose?
The UUID itself is a universally unique identifier - that's what UUID means, after all. There are a wide range of scenarios where public knowledge of the UUID could be a problem, all absolutely vanishingly unlikely - think things like "state-level actors falsifying evidence about what data they found on a system and using their knowledge of the drive's UUID as 'proof' that the evidence was legitimately collected instead of just made up from whole cloth."
Given it takes a whopping four seconds to elide the UUID, and absence of the UUID has zero impact on diagnosing the problem, why wouldn't I keep it private?
Back on topic: I still haven't rebooted since the update. Would I be safe to do so, or shall I keep on truckin' until we're closer to figuring out the root cause of the issue?
-
Check your logs for host resets:
dmesg -T | grep DRDY
If you're seeing those, then you've likely got this issue.
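If you want a quick summary rather than the raw log lines, the same grep can be extended - a sketch assuming GNU coreutils:

```shell
# Count the DRDY error lines, then break them down by ATA port -
# handy for telling one flaky drive apart from a controller-wide problem.
dmesg -T | grep DRDY | wc -l
dmesg -T | grep DRDY | grep -o 'ata[0-9]*\.[0-9]*' | sort | uniq -c
```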
-
Thanks, @FloBaoti - my post-upgrade one looks like this:
verbosity=1 bootlogo=false overlay_prefix=rockchip rootdev=UUID=a213f2de-xxxxx-xxxxx-xxxxx-xxxxxxxxxx rootfstype=ext4 usbstoragequirks=0x2537:0x1066:u,0x2537:0x1068:u
The drive UUID, elided for privacy, refers to /dev/sda1 - which is the SATA SSD I've got the operating system installed on, with /dev/mmcblk1p1 being on a UUID starting b72a980c. In other words, everything looks fine - though I'm still hanging fire on a reboot to check that fact!
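For anyone doing the same check on their own box, lsblk prints the device-to-UUID mapping directly, so the value in /boot/armbianEnv.txt can be matched against the right drive:

```shell
# Show each block device with its filesystem UUID and mount point.
lsblk -o NAME,FSTYPE,UUID,MOUNTPOINT
```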
-
Could someone share a before/after of /boot/armbianEnv.txt? I've upgraded but not yet rebooted, and while my armbianEnv.txt *looks* normal I'm loath to reboot until I'm sure it *is* normal!
-
My Helios64 ran a btrfs scrub on the 1st of the month, and that triggered a whole bunch more READ FPDMA QUEUED errors - 41 over the space of two hours, clustered in two big groups at around 57 minutes into the scrub and 1h46m into the scrub. Despite the errors, the scrub completed successfully without reporting any read errors.
-
I'm seeing the same on my Helios64, on ata2.00 - a bunch on Monday January 4th, then even more on Thursday January 14th. None since.
ata2.00 is a Western Digital Red Plus 6TB, purchased new for the Helios64. It's in a mirror with a 6TB Seagate IronWolf; the only other drive in the system is a 240GB M.2 SSD on the mainboard. The drive has completed SMART tests without errors, and the only thing of interest in the SMART attributes is 199 UDMA CRC Error Count at 5. The hard drives are in bays 2 & 4; 1, 3, and 5 are empty.
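For anyone comparing notes on that attribute, it can be pulled out with smartctl from the smartmontools package - /dev/sdb here is a placeholder, so match it to whichever node your ata2 drive appears as:

```shell
# Dump the SMART attribute table and pick out the UDMA CRC counter; a
# rising value here usually points at cabling or the backplane rather
# than the platters themselves.
sudo smartctl -A /dev/sdb | grep -i "UDMA_CRC"
```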
Board serial number is 000100001123, running Ubuntu 20.04.1 LTS 5.9.14-rockchip64.
-
Received my Helios64 and installed Armbian Focal. All seems well for me, except:
$ lscpu
Architecture:                    aarch64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
CPU(s):                          6
On-line CPU(s) list:             0-5
Thread(s) per core:              1
Core(s) per socket:              3
Socket(s):                       2
NUMA node(s):                    1
Vendor ID:                       ARM
Model:                           4
Model name:                      Cortex-A53
Stepping:                        r0p4
CPU max MHz:                     1800.0000
CPU min MHz:                     408.0000
BogoMIPS:                        48.00
NUMA node0 CPU(s):               0-5
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Vulnerable
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
Are there mitigations in the works for those speculative store bypass and Spectre v2 vulnerabilities?
Also, is there any way to monitor the UPS battery's charge status? I've found gpio-charger/status, which gives me "Not charging" when it's fully topped up and "Charging" when it's charging or (confusingly) discharging, but I can't find anything else of use in there.
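For reference, the reading I described comes from sysfs - the exact path is an assumption based on where the gpio-charger driver usually registers its power_supply device, so check what ls reports on your own system:

```shell
# List the power-supply devices the kernel knows about, then read the
# charger status; "Charging" covers both charging and (confusingly)
# discharging, with no charge-percentage attribute exposed.
ls /sys/class/power_supply/
cat /sys/class/power_supply/gpio-charger/status
```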
Upgrading to Bullseye (troubleshooting Armbian 21.08.1)
in Rockchip
Posted
So I can see, which is why I am making sure to move away from Armbian - and I recommend anyone reading this do the same. It's simply not a project you can rely upon for daily use. For messing around with as a hobbyist, sure, why not. But to actually rely upon? Impossible.
Especially when the founder doesn't understand that bug reports - especially bug reports from beta/nightly users, which have been specifically requested right here in this thread - have value to the project. With an attitude like that, Armbian will never become anything more than a hobbyist's curiosity.
I'll repeat for clarity, so others reading this thread don't misunderstand my meaning: this is going to keep happening. Future updates will continue to break the Helios64, and other devices, and there is nothing you can do about it. Armbian, as Igor says, simply doesn't have the resources to properly test its updates before release, much less actually respond to bug reports. You will not be able to rely on it to keep your Helios64 running and your data safe. If you're happy with that, by all means continue to use it; otherwise, I'd advise looking into alternative software and/or moving to a new NAS altogether.