


Posts posted by 0utc45t

  1. 5 minutes ago, Mark Dixon said:

    Is that the top two trays (ie SATA 4 and 5?). That's the same issue I'm seeing I think.


    Top? Hard to say which side counts as the top... ;-) But no, I have issues with HDD 1 & 2 according to the LED labels: the two leftmost drive slots, farthest from the LEDs.

  2. I've been having HDD/SATA issues with my ZFS pool, as have many others. Now my HDDs will not spin up or be recognized in slot 1 or 2. What could be wrong, and how and what should I check? Any help appreciated.

  3. My Helios froze today: no response over SSH or the USB serial console. The blue heartbeat LED was off and the red LED was on; I can't remember if it pulsed/blinked... I had to reboot. Since then my system has been resilvering (a 3-disk ZFS mirror pool), with 2 of the disks being resilvered. Here's the zpool status output:


    root@helios64:~# zpool status
      pool: export
     state: DEGRADED
    status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
      scan: resilver in progress since Fri Jun  4 14:32:34 2021
        1.52T scanned at 161M/s, 1.34T issued at 143M/s, 1.85T total
        207M resilvered, 72.47% done, 01:02:29 to go

        NAME                                          STATE     READ WRITE CKSUM
        export                                        DEGRADED     0     0     0
          mirror-0                                    DEGRADED     0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4RP989R  UNAVAIL      5   591     0  (resilvering)
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4TH65ZV  FAULTED      4   455     0  too many errors  (resilvering)
            wwn-0x5000c500c7771926                    ONLINE       0     0     0

    errors: No known data errors

    The two disks being resilvered are the new ones; the last one is older.


    Is there a solution of some sort yet? I've just requested the replacement HDD cables via email...
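For anyone in the same situation, here is a sketch of the sanity checks I would run once the resilver completes. This is an assumption-laden example, not a fix: the pool name `export` comes from the status output above, the `/dev/sd?` paths are placeholders for your actual drives, and the usual rule of thumb is that a rising UDMA CRC error count in SMART points at cabling/backplane rather than the disks themselves.

```shell
# Post-resilver sanity checks (sketch; guarded so it runs cleanly even
# where the tools are missing). Pool name taken from the zpool status
# output above; /dev/sd? paths are examples, not my actual layout.
pool=export
if command -v zpool >/dev/null 2>&1; then
    zpool clear "$pool"       # reset the READ/WRITE/CKSUM counters
    zpool status -v "$pool"   # then watch whether errors reappear
fi
for dev in /dev/sda /dev/sdb; do
    if command -v smartctl >/dev/null 2>&1 && [ -b "$dev" ]; then
        # rising UDMA_CRC_Error_Count usually means cables, not disks
        smartctl -A "$dev" | grep -iE 'reallocated|pending|udma_crc'
    fi
done
```

If the CKSUM/WRITE counters climb again right after a `zpool clear` while SMART stays clean, that is consistent with the harness problem rather than two bad new drives.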


  4. In case of disk failure, my plan was simply to copy the filesystem to a backup, copy that backup onto a new disk, insert the disk into slot 1, and boot. But it will not boot. I presume this because I tried to repartition/resize the disk holding the system in slot 1, and it failed to boot after that. So there is something more involved than the filesystem alone; looking at the mounts gives some hints. I would like to get the system backed up properly. Can someone list the needed bits and tweaks, so that I know the backup will actually work in case of disk failure?
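One plausible explanation for the non-booting clone, offered as an assumption: on rk3399 boards like the Helios64, Armbian typically writes the boot blobs (idbloader/u-boot) into the raw area before the first partition, so a pure file-level copy of `/` never sees them. The sketch below simulates the clone on temp files so it is safe to run anywhere; the real device names (`/dev/sdX`, `/dev/sdY`) and the 16 MiB raw-area size are placeholders you would need to verify for your own layout.

```shell
# Safe simulation of a disk clone using temp files instead of real devices.
# Assumption: the bootloader lives in the raw area before the first
# partition (within roughly the first 16 MiB), which a filesystem-level
# copy misses -- explaining a clone that won't boot.
OLD=$(mktemp)   # stand-in for the old system disk (e.g. /dev/sdX)
NEW=$(mktemp)   # stand-in for the new disk (e.g. /dev/sdY)
dd if=/dev/urandom of="$OLD" bs=1M count=20 status=none  # fake disk contents

# Step 1: copy the raw bootloader area ahead of the first partition.
dd if="$OLD" of="$NEW" bs=1M count=16 conv=fsync status=none

# Steps 2-4 on a real disk (NOT run here): partition and mkfs $NEW, then
#   rsync -aHAXx --numeric-ids / /mnt/
# and finally fix the root UUID in /boot/armbianEnv.txt and /etc/fstab
# if the new filesystem got a different UUID.

# Verify the raw area matches byte-for-byte.
cmp -n $((16*1024*1024)) "$OLD" "$NEW" && echo "raw bootloader area cloned"
```

The point of the simulation is the shape of the procedure (raw area first, filesystem second, boot config last), not the exact offsets, which differ between images.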

  5. I'm suffering from this issue too. SATA SSD in slot 1 (with the system on it) and a ZFS mirror pool on the rest. I started with 2 older disks; a scrub triggered problems, so I added one new disk to restore redundancy... then started getting errors on that one too, added a second new IronWolf, and now my pool is resilvering, with 1 old WD disk and 1 new Seagate in a degraded state (too many errors). Waiting for the resilver onto the second new Seagate IronWolf to finish so my data is mirrored again...


    Any real solution yet? 

  6. Hi, 


    I would like to make backup(s) of my system. The system (Debian) is installed on a SATA SSD in slot 1. I would like the backup of the system written as an image to the ZFS pool on the other slots. Then, in case of system SSD failure, I could write it back to a new SSD installed in the same slot 1 by booting a rescue system from a micro SD card.


    Has this kind of functionality already been implemented, or parts of it? Linux has a bunch of backup solutions, but I suspect they might not work with the Helios64 box (SD card, internal SSD). And my only Helios64 box is in "production" use, so trial and error is not the first option on my mind...
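As a sketch of the image-based approach described above, here is the backup/restore round trip simulated on temp files so it can be run without touching real hardware. Everything here is a placeholder assumption: on the real box `SRC` would be the system SSD's block device (e.g. `/dev/sda`), `IMG` a file on a dataset of the ZFS pool, and the restore step is what you would run from the rescue SD card against a fresh SSD.

```shell
# Simulated image backup + restore round trip (safe: temp files only).
SRC=$(mktemp); IMG=$(mktemp); DST=$(mktemp)
dd if=/dev/urandom of="$SRC" bs=1M count=8 status=none  # fake SSD contents

# Back up: stream the whole "disk" into a compressed image on the pool.
gzip -c "$SRC" > "$IMG"

# Restore: write the image back out to a "new SSD".
gunzip -c "$IMG" > "$DST"

# The restored disk must match the original byte-for-byte.
cmp "$SRC" "$DST" && echo "image round-trip OK"
```

A whole-device image like this captures the bootloader area and partition table along with the filesystems, which is exactly the part that file-level backups of an SBC system disk tend to miss; the trade-off is that the image is only consistent if taken while the system disk is not being written (hence the rescue-SD workflow).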

  7. I'm trying to get the desktop to display through the USB-C connector with an appropriate USB-C to HDMI cable. I don't find errors in Xorg.0.log, but I still get no signal on my TV. I installed the desktop via armbian-config. I've never driven a display over USB-C before, so I need some pointers to debug this and get it working... I didn't find any howtos or the like.
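Not a fix, but a sketch of where I would start looking. The assumptions here: on rk3399 the USB-C DisplayPort path goes through the `cdn-dp` DRM driver, and connector entries appear under `/sys/class/drm`; the exact names may differ on your kernel, so treat the grep patterns as starting points.

```shell
# Debug pointers for "no signal" over USB-C DisplayPort alt mode (sketch;
# guarded so it exits cleanly even on a machine with no DRM devices).

# 1. What the DRM subsystem thinks about each connector
#    ("connected" / "disconnected" per connector).
for c in /sys/class/drm/card*-*; do
    [ -e "$c/status" ] || continue
    printf '%s: %s\n' "${c##*/}" "$(cat "$c/status")"
done

# 2. Kernel messages about Type-C / DisplayPort alt-mode negotiation
#    (driver names are assumptions from the rk3399 tree).
dmesg 2>/dev/null | grep -iE 'typec|altmode|cdn-dp' || true

# 3. What Xorg detected (only meaningful with the desktop running).
command -v xrandr >/dev/null 2>&1 && xrandr --query 2>/dev/null || true
```

If the connector never reports "connected" in step 1, the problem is below Xorg (cable orientation, alt-mode negotiation, or the DP driver), which would explain a clean Xorg.0.log with no signal.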

  8. @SIGSEGV I compiled ZFS 2.0.0 and it's up and running. I rsynced 1.4 TB to the Helios box after that; it seems to be working. Next, Home Assistant; once that's up and running, I'll retire the ATX computer that has been doing those jobs...

  9. I actually managed to get ZFS working by following the post by @grek in this very thread. I've been running some fio tests to see if everything is dandy. Seems to be. :-)

    root@helios64:/ironwolf # zpool --version
    root@helios64:/ironwolf # zfs version
    root@helios64:/ironwolf #


    I've noticed that ZFS 2.0.0 has been released, so I'll be compiling it once the fio runs are done (they've been running for a day or two now...).

  10. I'm new here; I joined to say that ZFS support for the Helios64 is badly needed. I have tried to get it working, but I'm failing miserably. I need help from more knowledgeable people, so I'll be following this thread.
