Everything posted by ShadowDance

  1. @RockBian it wasn't my intention to paint the issue as black and white. My statement was based on the fact that most of the reports here have come from users of WD REDs. I personally believe this issue is far more prevalent than this thread makes it out to be; it's simply that triggering it may require time and the right kind of workload. So people either don't trigger the issue, or they may not even notice it unless they're checking system logs. I hope your new disks work out for you. Either way, I never tried any other disks in my Helios and probably never will, considering there's no further development and Armbian used to break seemingly every other day (a while back).
  2. @meymarce it's most likely related, and not due to bad disks. All my disks have accumulated a bunch of UDMA_CRC_Error_Count errors thanks to the Helios. I've since completely stopped using the Helios and put the same disks inside an ASUSTOR. Zero issues. Basically, don't put WD REDs in a Helios, or be prepared to have a bad time, heh.
  3. Same here, the instability of my Helios64 combined with Armbian not having a test suite for it (and thus breaking it at any point) led me to splurge on hardware that cost 4x as much. A NAS should be out of sight and out of mind, not a constant source of worry.
  4. @RockBian hmm, that output looks more like there's a problem with the disk itself. At least none of the SATA errors I've experienced have been logged by the disks themselves, but that could come down to differences between manufacturers. I'd recommend running a short and a long self-test (via smartctl) on the disk, as sketched below.
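     For reference, something along these lines should do it (assuming the disk is /dev/sda; adjust to your device):

       # Quick electrical/mechanical self-test, takes a couple of minutes
       sudo smartctl -t short /dev/sda

       # Full surface scan, can take many hours on a large disk
       sudo smartctl -t long /dev/sda

       # Check progress and the results once the tests have finished
       sudo smartctl -a /dev/sda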
  5. It does look like the same issue to me, and the filesystem does not matter; these SATA errors can present themselves simply by reading the disk. As a result, filesystem corruption is not unexpected, though ZFS does protect us from it. But it can't be discounted that this could be a disk error as well, perhaps it's nearing its end of life. Which brand / model of disk do you have?
  6. Changing the CPU voltage didn't help either. However, I might have finally found a pattern to the pool suspension. On two occasions it seems to have happened during snapshot replication (from another machine to the Helios) via zrepl. Not sure if it's during a recv, destroy or hold/release, so I'm now doing more verbose logging of the ZFS events (see below) in hopes that it will provide some clues. Edit: Could be just a normal snapshot/delete too, will be able to verify next time it happens.
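     In case it helps anyone, the event logging is nothing fancier than following the standard OpenZFS event log (a sketch):

       # Follow ZFS events verbosely as they happen
       sudo zpool events -vf

       # Or dump everything recorded so far, then clear the backlog
       sudo zpool events -v
       sudo zpool events -c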
  7. Thanks for the hint, I was under the impression that VDD changes were no longer necessary, so I hadn't even considered them. But I think I'll try just that; it's a stab in the dark but it does align with my suspicion about the CPU, and at this point I'll try just about anything.
  8. Thanks for replying and digging into the sources @meymarce, I'll try to comb over that code path and see if I can find any clues as to what's going on. This happens with both SATA firmwares (updated and original); I'm currently running the original. And I'm running Buster as well (and my drives are 5 x WD RED 6TB). Did you raise the voltage to prevent SATA errors/resets? How did you go about doing that, if I may ask? The issue doesn't require a lot of load from what I've observed, it happens seemingly randomly (as can be seen in the following graphs for the last 7 days, "gaps" indicate a panic + time before I've rebooted).
  9. Hey, posting this in hopes that someone might have ideas as to why this is happening. I've been dealing with an issue with my ZFS pool for a while now where the pool gets suspended but there are no other error indicators:

     WARNING: Pool 'rpool' has encountered an uncorrectable I/O failure and has been suspended.

     I'm using ZFS on top of LUKS and used to have problems with my drives resetting due to SATA errors, but I haven't really seen that issue since I started using my own SATA cables and limiting SATA speed to 3 Gbps. My working theory for the last week has been that it's a problem with the CPU, perhaps some handover between big.LITTLE. So I've tried changing ZFS module options to pin workers to CPU cores, and I've also tried dm-crypt options that do this (sketched at the end of this post), but nothing has helped. So either the theory was wrong, or the tweaks did not change the faulty behavior. I also tried disabling the little cores, but the machine refused to boot as a result.

     With anywhere from two pool stalls per day to one per week, I'm pretty much at my wit's end and ready to throw in the towel with my Helios64. In addition, I still have random kernel stalls/panics originating from rcu or null pointer dereferences (on boot, usually). I'm not really interested in learning to debug the Linux kernel, so I might just throw money at the problem and retire the Helios unless someone has a solution for this. I do love the idea of open source hardware and wish the best success for Kobol and the Helios, but I wasn't quite ready to commit to this many problems.

     I've also tried setting the pool failure mode to panic (zpool set failmode=panic rpool) but it provides no useful output as far as I can tell:

       [22978.488772] Kernel panic - not syncing: Pool 'rpool' has encountered an uncorrectable I/O failure and the failure mode property for this pool is set to panic.
       [22978.490035] CPU: 1 PID: 1429 Comm: z_null_int Tainted: P OE 5.9.14-rockchip64 #20.11.4
       [22978.490833] Hardware name: Helios64 (DT)
       [22978.491182] Call trace:
       [22978.491416]  dump_backtrace+0x0/0x200
       [22978.491743]  show_stack+0x18/0x28
       [22978.492041]  dump_stack+0xc0/0x11c
       [22978.492346]  panic+0x164/0x364
       [22978.492962]  zio_suspend+0x148/0x150 [zfs]
       [22978.493678]  zio_done+0xbd0/0xec0 [zfs]
       [22978.494387]  zio_execute+0xac/0x150 [zfs]
       [22978.494783]  taskq_thread+0x278/0x460 [spl]
       [22978.495161]  kthread+0x140/0x150
       [22978.495453]  ret_from_fork+0x10/0x34
       [22978.495778] SMP: stopping secondary CPUs
       [22978.496134] Kernel Offset: disabled
       [22978.496443] CPU features: 0x0240022,2000200c
       [22978.496817] Memory Limit: none
       [22978.497098] ---[ end Kernel panic - not syncing: Pool 'rpool' has encountered an uncorrectable I/O failure and the failure mode property for this pool is set to panic. ]---

     It's not necessarily the Helios's fault either, this could very well be a bug in ZFS on ARM for all I know.
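     For reference, the kind of pinning tweaks I mean, in case anyone wants to compare notes (a rough sketch; the cryptroot mapping name is just a placeholder, and neither change helped in my case):

       # Bind SPL/ZFS taskq threads to CPUs; takes effect after a module
       # reload or reboot
       echo "options spl spl_taskq_thread_bind=1" | sudo tee /etc/modprobe.d/spl.conf

       # dm-crypt: keep crypto work on the submitting CPU
       # (cryptsetup 2.x performance flags)
       sudo cryptsetup refresh --perf-same_cpu_crypt --perf-submit_from_crypt_cpus cryptroot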
  10. @usefulnoise I'd start by trying all the different suggestions in this thread, e.g. limiting the link speed and disabling NCQ, and if you're not using raw disks (i.e. you're on partitions or dm-crypt), make sure you've disabled IO schedulers on the disks, etc. Example: libata.force=3.0G,noncq,noncqtrim (see below for where to put it). Disabling ncqtrim is probably unnecessary, but it doesn't provide any benefit with spinning disks anyway. If none of this helps, and you're sure the disks aren't actually faulty, I'd recommend trying the SATA controller firmware update (it didn't help me) or possibly experimenting with removing noise: hook the PSU to a grounded wall socket, use 3rd party SATA cables, or try rerouting them. Possibly, if you're desperate, try removing the metal clips from the SATA cables (the clip that hooks into the motherboard socket); it shouldn't cause any problems, but the clip could perhaps function as an antenna for noise.
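      Roughly like this (a sketch; sda is just an example device, repeat the scheduler change for each disk backing the pool):

        # Append the options to the extraargs line in /boot/armbianEnv.txt
        # (create the line if it doesn't exist), then reboot:
        #   extraargs=libata.force=3.0G,noncq,noncqtrim

        # Set the IO scheduler to none for a disk used by ZFS
        echo none | sudo tee /sys/block/sda/queue/scheduler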
  11. Thanks for the updated firmware. Unfortunately, I'm seeing the same as Wofferl: at first the flashing didn't seem to work (I used balenaEtcher without unpacking the file; it seemed to understand that it was compressed, so it went ahead with it), and there was no reboot either. Here's the output: Then I tried flashing it again, unpacking it first this time around, and the flashing worked as described:
  12. Finally got around to testing my Helios64 on the grounded wall outlet (alone), and unfortunately there was no change. The disks behaved the same as before and I also had those kernel panics during boot that I seem to get on ~3 out of 5 bootups. Yes, sounds very reasonable, and this is my expectation too. I didn't put too much weight behind what I read, but over there one user had issues with his disks until he grounded the cases of the HDDs. Sounded strange, but at this point I'm open to pretty much any strange solution, haha. Could've just been poorly designed/manufactured HDDs too.
  13. Thanks for sharing, and it seems you are right. I've read some confusing information back and forth and, to be honest, was never really sure whether mine were SMR or CMR. It doesn't help that WD introduced the Plus-series and then claims all plain Red (1-6TB) are SMR. Good to know they're CMR; it makes sense, since I haven't noticed performance issues with them.
  14. That's great news, looking forward to it! Also nice! If you do figure out a trick we can implement ourselves (i.e. via soldering or, dare I say, foil wrapping) to reduce the noise there, let us know. Speaking of noise, I have two new thoughts:
      • My Helios64 PSU is not connected to a grounded wall outlet. Unfortunately, here in my apartment there are only two such outlets, in very inconvenient places, but I'll see if it's feasible to test it out on one of them. Perhaps this introduces noise into the system? Or perhaps other devices are leaking their noise via the grounded extension cord (I recently learned that using a grounded extension cord on a non-grounded outlet is not ideal; I hadn't even thought to consider it before).
      • I read somewhere that the metal frame of the HDD cases should be connected to device ground, but since we're using plastic mount brackets I doubt this is the case? To be honest, I don't know if this is a real issue or not, just something I read on a forum.
  15. Are you sure about that? Generally CMR is considered good; SMR is what you'd want to stay away from. Is it something related to these specific drives?
  16. @Wofferl those are the exact same model as three of my disks (but mine aren't "Plus"). I've used these disks in another machine with ZFS and had zero issues (ASM1062 SATA controller). So if we assume the problem is between the SATA controller and the disk, and while I agree with you that it's probably in part a disk issue, I'm convinced it's something that would be fixable in the SATA controller firmware. Perhaps these disks do something funny that the SATA controller doesn't expect? And based on all my testing so far, the SATA cable also plays a role, meaning perhaps there's a noise factor in play (as well). Side note: Western Digital really screwed us over with this whole SMR fiasco, didn't they. I'd be pretty much ready to throw these disks in the trash if it wasn't for the fact that they worked perfectly on another SATA controller. @grek glad it helped! By the way, I would still recommend changing the IO scheduler to none, because bfq is CPU intensive and ZFS does its own scheduling. It probably won't fix the issues but might reduce some CPU overhead.
  17. I have some new things to report. I finally got around to replacing all SATA cables with 3rd party cables and using the power-only harness that the team at Kobol sent me. So I went ahead and re-enabled 6 Gbps, but to my disappointment I ran into failed command: READ FPDMA QUEUED errors again. Because of this I tried once again to disable NCQ (extraargs=libata.force=noncq) and lo and behold, I didn't run into disk issues for the 1+ week I was testing it (kernel 5.9.14). This made me hopeful that maybe, just maybe, my previous test with NCQ disabled was bad, so I re-installed the new standard harness I received from Kobol, but unfortunately I immediately started having issues again: ata4.00: failed command: READ DMA EXT. Note that this is with the IO scheduler set to none. @gprovost are you any closer to figuring out what could be wrong here or have a potential fix in the pipeline? @grek I would recommend adding extraargs=libata.force=noncq to /boot/armbianEnv.txt and seeing if it helps. It might not completely fix the issue but could make things more stable.
  18. @scottf007 I think it would be hard for anyone here to really answer whether it's worth it or not [for you]. In your situation, I'd try to evaluate whether or not you need the features that ZFS gives you. For instance, ZFS snapshots are something you never really need, until you do. When you find that you deleted some data a month ago and can still recover it from a snapshot, it's a great comfort (see the example below). If that's something you value, btrfs could be an alternative and is already built into the kernel. If all you need is data integrity, you could consider dm-integrity + mdraid with the filesystem of your choice on top (EXT4, XFS, etc.). Skipping "raid" altogether would also be possible; LVM allows for great flexibility with disks.

      If you're worried about the amount of work you need to put in with ZFS, you can freeze the updates once you are satisfied with the stability of the system. Just run `sudo apt-mark hold linux-image-current-rockchip64 linux-dtb-current-rockchip64`, which prevents kernel/boot instruction updates, and you should not have ZFS break on you any time soon. Conversely, `unhold` once you're ready to deal with the future.

      For me personally, ZFS is totally worth it. I have it on two server/NAS machines at home. I use ZFS native encryption on one, and LUKS+ZFS on the Helios64 (due to its CPU capabilities). I also use a tool named zrepl for automatically creating, pruning and replicating snapshots. So, for instance, my most important datasets are backed up from my other machine to the Helios64 in raw mode; this means the data is safe but not readable by the Helios64 without loading the encryption keys. I also run Armbian on the Helios64 straight off of ZFS (root on ZFS), which gives me the ability to easily roll the system back if, say, an update broke it.

      @hartraft it depends on your requirements/feature wishlist. RAID (mdraid), for instance, cannot guarantee data consistency (unless stacked with dm-integrity). What this means is that once data is written to the disk, it can still become corrupted and RAID can't catch it. ZFS guards against this via checksums on all data, i.e. once it's on disk, any corruption is guaranteed to be detected and is likely repairable from one of the redundant disks. ZFS also has support for snapshots, meaning you can easily recover deleted files from snapshots, etc. RAID does not support anything like this. Looking at mergerfs, it seems to lack these features as well, and it runs in user space (via FUSE), so it's not as integrated. SnapRAID is a backup program so not really comparable, and MooseFS I know nothing about, but it looks enterprise-y. The closest match-up for ZFS in terms of features is probably btrfs (in kernel) or bcachefs (I have never used this).
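      To illustrate the snapshot point, recovering an accidentally deleted file is roughly this simple (dataset, snapshot and file names here are made up; adjust the paths to your mountpoints):

        # List the snapshots of a dataset
        zfs list -t snapshot rpool/data/documents

        # Copy the file back out of the hidden, read-only snapshot directory
        cp /rpool/data/documents/.zfs/snapshot/zrepl_20210301/report.odt /rpool/data/documents/

        # Or roll the whole dataset back (this discards everything written since!)
        sudo zfs rollback rpool/data/documents@zrepl_20210301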
  19. I'm experiencing a lot of issues on 5.10.x and thought a dedicated thread would make sense. I've posted some reports in other threads too. Note that 5.9.x is much more stable for me, but not without problems. I'm also experiencing what I can only refer to as boot stalls, i.e. nothing happens. Occasionally when these stalls happen there is an rcu: INFO: rcu_preempt detected stalls on CPUs/tasks trace printed. This happens on both 5.9 and 5.10. Looking at where it stalls, it seems to be around the time the hardware optimizations are applied and/or the graphics modules are loaded. I have currently blocked the graphics modules from loading and have moved the hardware optimizations to the initrd so that they are performed earlier. Time will tell whether this has an effect or not.

      Distro: Armbian Buster 21.02.3
      Kernel: 5.10.21
      /proc/cmdline: root=ZFS=rpool/ROOT/debian rootwait rootfstype=zfs earlycon console=ttyS2,1500000 consoleblank=0 loglevel=7 ubootpart=794023b5-01 usb-storage.quirks=0x2537:0x1066:u,0x2537:0x1068:u libata.force=noncq earlyprintk ignore_loglevel cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory swapaccount=1
      Filesystem: ZFS 2.0.3 (root on ZFS, boot from eMMC)
      LUKS: Yes, full disk encryption (HDD -> Partition -> LUKS -> ZFS)
      IO scheduler: None

      Panic during boot: Unable to handle kernel write to read-only memory at virtual address ffff8000100d3a54
      Kernel bug (system left inoperable): Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
      And the latest issue: WARNING: Pool 'rpool' has encountered an uncorrectable I/O failure and has been suspended.

      Note: The warning above was only printed the latter time, but the same symptom occurred twice within two days, i.e. everything gets stuck in iowait and it's basically impossible to do anything with the system. This only happens on 5.10; on 5.9 (ZFS 2.0.3 as well) there is no such issue. Anyway, I'm back on 5.9.14 for the time being. I just finished a full scrub of the ZFS filesystem as well, with no issues or errors.
  20. @antsu this may not at all be related to your issue, but are you using swap on a raw disk or on top of ZFS, by any chance? If it's the latter, I would advise against it. There are some problems with doing that which are exacerbated on low-memory systems. See https://github.com/openzfs/zfs/issues/7734 for more info.
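      A quick way to check what your swap is backed by (standard tools, nothing ZFS-specific assumed):

        # Show the active swap devices/files
        swapon --show

        # List ZFS volumes; if a swap device above is /dev/zd* (or under
        # /dev/zvol/...), it's sitting on a zvol
        zfs list -t volume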
  21. Boot-time panic on 5.10.21-rockchip64 (21.02.3): Unable to handle kernel write to read-only memory at virtual address ffff8000100d3a54
  22. @antsu you could try changing the IO scheduler for those ZFS disks (to `none`) and see if it helps; I wrote about it here:
  23. @jbergler I recently noticed the armbian-hardware-optimization script for the Helios64 changes the IO scheduler to `bfq` for spinning disks; however, for ZFS we should be using `none` because it has its own scheduler. Normally ZFS would change the scheduler itself, but that only happens if you're using raw disks (not partitions) and if you import the zpool _after_ the hardware optimization script has run. You can try changing it (e.g. `echo none >/sys/block/sda/queue/scheduler`) for each ZFS disk and see if anything changes; a sketch for making it stick is below. I still haven't figured out if this is a cause of any problems, but it's worth a shot.
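      If you want the setting to survive reboots, something like this udev rule should work (an untested sketch; adjust the device match to your disks, and note that armbian-hardware-optimization may still override it at boot):

        sudo tee /etc/udev/rules.d/90-io-scheduler.rules <<'EOF'
        ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"
        EOF
        sudo udevadm control --reload && sudo udevadm trigger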
  24. Not a panic per se, but the system became unresponsive (unrecoverable) after a few minutes (ondemand governor) on 21.02.3: The relevant part (for my issue) here seems to be "rcu: INFO: rcu_preempt detected stalls on CPUs/tasks". This has happened a lot on every 5.10 kernel; previously I only used the performance governor, but now it obviously happened on ondemand too. Upon reboot I could not even reach full system boot-up before it stalled, so I'm now back on 5.9.14, which only panics once every ~1-2 weeks but is otherwise stable.
  25. @gprovost regarding compression, there shouldn't be any RAM constraints that need to be considered; ZFS compression operates in recordsize'd chunks (i.e. between 4KiB and 1MiB with ashift=12). Personally I think it's a good idea to set LZ4 as the default compression for the entire pool and then adjust on a per-dataset basis where needed. LZ4 is very cheap in CPU terms and can give some easy savings (see below). I would not advise using Gzip, as the CPU overhead is quite significant; if higher compression is required, OpenZFS 2.0+ with Zstandard (zstd) compression might be a better alternative, as it can achieve Gzip-level compression at a much lower (de)compression cost.

      As @wurmfood pointed out, however, it's quite rare that media or encrypted data would benefit from compression; the only exception is when that data contains padding/zeroes, which compresses well under LZ4. So that's a good example of when not to use compression. Also keep in mind that disabling compression will not decompress existing data: uncompressed, LZ4-compressed, Gzip-compressed, etc. data can all coexist within one dataset, and a full rewrite of all data is needed to change the compression of everything.

      As for deduplication, unless the zpool is extremely small, dedup is not really an option on the Helios64 as it requires very large amounts of RAM. The man page recommends at least 1.25 GiB of RAM per 1 TiB of storage.

      For reference, some of my space savings from using compression:

      NAME                             USED   COMPRESS  RATIO
      rpool/ROOT/debian                15.5G  lz4       1.73x
      rpool/data/service               18.7G  lz4       2.08x
      rpool/data/service/avahi         208K   lz4       1.00x
      rpool/data/service/grafana       48.4M  lz4       1.83x
      rpool/data/service/loki          736K   lz4       1.00x
      rpool/data/service/mariadb       676M   lz4       3.25x
      rpool/data/service/mariadb/db    214M   lz4       1.88x
      rpool/data/service/mariadb/dump  455M   gzip      3.88x
      rpool/data/service/prometheus    10.9G  lz4       2.29x
      rpool/data/service/promtail      768K   lz4       1.00x
      rpool/data/service/samba         2.73M  lz4       6.04x
      rpool/data/service/unifi         3.41G  lz4       1.34x
      rpool/data/service/unifi/db      2.38M  lz4       1.00x
      rpool/var/log                    18.0G  lz4       2.26x
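      For anyone wanting to try it, enabling compression is a one-liner per pool/dataset (rpool/data/media is a hypothetical dataset name; the setting applies to newly written data only, existing data stays as it is):

        # LZ4 as the pool-wide default
        sudo zfs set compression=lz4 rpool

        # Stronger compression for a dataset of compressible dumps
        sudo zfs set compression=gzip rpool/data/service/mariadb/dump

        # Off where it doesn't help, e.g. already-compressed media
        sudo zfs set compression=off rpool/data/media

        # Check the achieved ratios
        zfs get -r compressratio rpool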