ShadowDance

  • Content Count: 54
  • Joined

1 Follower

About ShadowDance

  • Rank: Advanced Member


  1. Thanks for the updated firmware. Unfortunately, I'm seeing the same as Wofferl: at first the programming didn't seem to work (I used balenaEtcher without unpacking the file; it seemed to understand that it was compressed, so it went ahead with it), and there was no reboot either. Here's the output: Then I tried flashing it again, unpacked it first this time around, and the flashing worked as described:
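
      For anyone who prefers to skip balenaEtcher, the manual equivalent is roughly the following (a sketch only; the image filename and target device are placeholders, so double-check them with `lsblk` first):

      ```sh
      # Placeholder names -- substitute the actual firmware image and target device.
      IMAGE=helios64_firmware_update.img.xz
      TARGET=/dev/sdX        # verify with lsblk before writing!

      # Decompress first, then write, so there's no ambiguity about whether
      # the flashing tool handled the compression itself.
      xz --decompress --keep "$IMAGE"
      sudo dd if="${IMAGE%.xz}" of="$TARGET" bs=4M conv=fsync status=progress
      sync
      ```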
  2. Finally got around to testing my Helios64 on the grounded wall outlet (alone); unfortunately, there was no change. The disks behaved the same as before, and I also had those kernel panics during boot that I seem to get on roughly 3 out of 5 bootups. Yes, that sounds very reasonable, and it's my expectation too. I didn't put too much weight behind what I read, but over there one user had issues with his disks until he grounded the cases of the HDDs. Sounded strange, but at this point I'm open to pretty much any strange solution, haha. Could've just been poorly designed/manufactured HDDs.
  3. Thanks for sharing, and it seems you are right. I've read some confusing information back and forth and, to be honest, was never really sure whether mine were SMR or CMR. It doesn't help that WD introduced the Plus series and then claims all plain Red (1-6TB) drives are SMR. Good to know they're CMR, and it makes sense since I haven't noticed performance issues with them.
  4. That's great news, looking forward to it! Also nice! If you do figure out a trick we can implement ourselves (i.e. via soldering or, dare I say, foil wrapping) to reduce the noise there, let us know. Speaking of noise, I have two new thoughts: my Helios64 PSU is not connected to a grounded wall outlet. Unfortunately, here in my apartment there are only two such outlets, in very inconvenient places, but I'll see if it's feasible to test it out on one of them. Perhaps this introduces noise into the system? Perhaps other devices are leaking their noise via gr
  5. Are you sure about that? Generally, CMR is considered good; SMR is what you'd want to stay away from. Is it something related to these specific drives?
  6. @Wofferl those are the exact same model as three of my disks (though mine aren't "Plus"). I've used these disks in another machine with ZFS and had zero issues (ASM1062 SATA controller). So if we assume the problem lies between the SATA controller and the disk: while I agree with you that it's probably in part a disk issue, I'm convinced it's something that would be fixable in the SATA controller firmware. Perhaps these disks do something funny that the SATA controller doesn't expect? And based on all my testing so far, the SATA cable also plays a role, meaning perhaps there's a noise factor in play (as we
  7. I have some new things to report. I finally got around to replacing all SATA cables with 3rd-party cables and using the power-only harness that the team at Kobol sent me. So I went ahead and re-enabled 6 Gbps, but to my disappointment I ran into `failed command: READ FPDMA QUEUED` errors again. Because of this I tried once again to disable NCQ (`extraargs=libata.force=noncq`) and, lo and behold, I didn't run into disk issues for the 1+ week I was testing it (kernel 5.9.14). This made me hopeful that maybe, just maybe, my previous test with NCQ disabled was bad, so I re-installed the new s
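
      For reference, this is roughly how I set the parameter on Armbian (a sketch, assuming `/boot/armbianEnv.txt` is where `extraargs` lives on your image):

      ```sh
      # Add the kernel parameter to Armbian's boot environment and reboot.
      # If an extraargs line already exists, append libata.force=noncq to it instead.
      echo 'extraargs=libata.force=noncq' | sudo tee -a /boot/armbianEnv.txt
      sudo reboot

      # After the reboot, a queue depth of 1 means NCQ is off for that disk.
      cat /sys/block/sda/device/queue_depth
      dmesg | grep -i ncq
      ```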
  8. @scottf007 I think it would be hard for anyone here to really answer whether it's worth it or not [for you]. In your situation, I'd try to evaluate whether or not you need the features that ZFS gives you. For instance, ZFS snapshots are something you never really need, until you do. When you find that you deleted some data a month ago and can still recover it from a snapshot, it's a great comfort. If that's something you value, btrfs could be an alternative and is already built into the kernel. If all you need is data integrity, you could consider dm-integrity+mdraid and a file system of choice on t
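
      To make the snapshot point concrete, a quick sketch (pool/dataset names are made up):

      ```sh
      # Take a point-in-time snapshot and list what exists.
      zfs snapshot tank/data@2021-03-01
      zfs list -t snapshot tank/data

      # A file deleted since then can be copied back out of the hidden,
      # read-only .zfs/snapshot directory...
      cp /tank/data/.zfs/snapshot/2021-03-01/important.odt /tank/data/

      # ...or the whole dataset can be rolled back (destructive: discards
      # everything written after the snapshot).
      # zfs rollback tank/data@2021-03-01
      ```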
  9. I'm experiencing a lot of issues on 5.10.x and thought a dedicated thread would make sense. I've posted some reports in other threads too. Note that 5.9.x is much more stable for me, but not without problems. I'm also experiencing what I can only refer to as boot stalls, i.e. nothing happens. Occasionally when these stalls happen, an `rcu: INFO: rcu_preempt detected stalls on CPUs/tasks` trace is printed. This happens on both 5.9 and 5.10. Looking at where it stalls, it seems to be around the time the hardware optimizations are applied and/or the graphics modules are loaded. I have cu
  10. @antsu this may not be related to your issue at all, but are you using swap on a raw disk or on top of ZFS, by any chance? If it's the latter, I would advise against it. There are some problems with that setup that are exacerbated on low-memory systems. See https://github.com/openzfs/zfs/issues/7734 for more info.
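
      A quick way to check which situation you're in (assuming a zvol-backed swap would show up under /dev/zd*):

      ```sh
      # Show active swap devices/files and their backing store.
      swapon --show

      # List zvols; if one of them is your swap device, that's the setup
      # the linked issue warns about (especially on a low-memory machine).
      zfs list -t volume
      ls -l /dev/zvol/*/* 2>/dev/null
      ```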
  11. Boot-time panic on 5.10.21-rockchip64 (21.02.3): `Unable to handle kernel write to read-only memory at virtual address ffff8000100d3a54`
  12. @antsu you could try changing the IO scheduler for those ZFS disks (to `none`) and see if it helps; I wrote about it here:
  13. @jbergler I recently noticed that the armbian-hardware-optimization script for Helios64 changes the IO scheduler to `bfq` for spinning disks; however, for ZFS we should be using `none` because it has its own scheduler. Normally ZFS would change the scheduler itself, but that only happens if you're using raw disks (not partitions) and if you import the zpool _after_ the hardware optimization script has run. You can try changing it (e.g. `echo none >/sys/block/sda/queue/scheduler`) for each ZFS disk and see if anything changes. I still haven't figured out if this is a cause for a
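
      Something like this is what I had in mind; note it doesn't survive a reboot, so it needs to be reapplied after each boot (or the hardware-optimization script adjusted):

      ```sh
      # sda/sdb/sdc are examples -- match them against `zpool status` first.
      for disk in sda sdb sdc; do
        echo none | sudo tee /sys/block/$disk/queue/scheduler
      done

      # Verify: the scheduler shown in [brackets] is the active one.
      cat /sys/block/sd?/queue/scheduler
      ```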
  14. Not a panic per se, but the system became unresponsive (unrecoverable) after a few minutes (ondemand governor) on 21.02.3: The relevant part (for my issue) here seems to be `rcu: INFO: rcu_preempt detected stalls on CPUs/tasks`. This has happened a lot on every 5.10 kernel; previously I only used the performance governor, but now it obviously happened on ondemand too. Upon reboot I could not even reach full system boot-up before it stalled, so I'm now back on 5.9.14, which only panics once every ~1-2 weeks but is otherwise stable.
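
      For testing governors I just switch them at runtime via sysfs (the persistent default on Armbian normally comes from /etc/default/cpufrequtils, as far as I know):

      ```sh
      # Show what's available and what's currently active.
      cat /sys/devices/system/cpu/cpufreq/policy0/scaling_available_governors
      cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor

      # Switch both RK3399 clusters (policy0 = A53, policy4 = A72) back to performance.
      for p in /sys/devices/system/cpu/cpufreq/policy*; do
        echo performance | sudo tee $p/scaling_governor
      done
      ```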
  15. @gprovost regarding compression, there shouldn't be any RAM constraints that need to be considered; ZFS compression operates in recordsize'd chunks (i.e. between 4KiB and 1MiB with ashift=12). Personally, I think it's a good idea to set LZ4 as the default compression for the entire pool and then adjust on a per-dataset basis where needed. LZ4 is very cheap in terms of CPU and can give some easy savings (see below). I would not advise using gzip, as the CPU overhead is quite significant; if higher compression is required, OpenZFS 2.0+ with Zstandard (zstd) compression might be a better alterna
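
      The commands involved are simple enough; a sketch with made-up pool/dataset names:

      ```sh
      # Pool-wide default, inherited by child datasets.
      zfs set compression=lz4 tank

      # Per-dataset overrides where it makes sense.
      zfs set compression=zstd tank/backups   # needs OpenZFS 2.0+
      zfs set compression=off tank/media      # already-compressed data gains nothing

      # Check what each dataset actually gains.
      zfs get compression,compressratio -r tank
      ```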