alexcp


Posts posted by alexcp

  1. Hello,

     

    My Helios4's power brick died. What's a good replacement?

    I almost ordered a Mean Well GST120A-R7B but realized its Mini-DIN connector has a different pinout.

     

    Also, the Helios4 is end-of-life. What does that mean in practice for software updates and support?

     

    Well, I have not been able to recover my data. Even though the RAID array was clean, the filesystem appeared damaged, as you suspected. The curious 18TB filesystem on a 32-bit rig is no more, unfortunately; I can no longer run any tests on it. The defective HDD was less than a year old and covered by a 3-year warranty, so it is on its way to the manufacturer; hopefully I will get a free replacement. I intend to keep the 4x 10TB drives on my Intel rig and rebuild the Helios with smaller, cheaper HDDs. To me, the incident is a reminder that a RAID array is not a complete solution for data safety and must be supported by other means, e.g. cloud or tape backups.

     

    I don't remember how I got the 18TB filesystem. I think I created a smaller one and then resized it up after deleting the encrypted partition, even though such resizing should be impossible according to your link above. Out of curiosity, I just did the following: I assembled a RAID5 array from the remaining 3x 10TB disks and tried to create an 18TB filesystem on it via OMV. The result was the following error message:

    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; mkfs -V -t ext4 -b 4096 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 -L 'public' '/dev/mapper/public-public' 2>&1' with exit code '1':
    mke2fs 1.43.4 (31-Jan-2017)
    Creating filesystem with 4883151872 4k blocks and 305197056 inodes
    Filesystem UUID: c731d438-7ccd-4d31-9277-c91b0ea62c72
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
            102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
            2560000000, 3855122432
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (262144 blocks): mkfs.ext4: Attempt to read block from filesystem resulted in short read while trying to create journal

    In the end, the filesystem was not created, but the error diagnostics above are not what I expected. I remember OMV3 telling me there was a limit on the filesystem size; OMV4 did not mention one.

     

    Perhaps there was (is?) a hole somewhere in the filesystem tools that allowed me to do a stupid thing and create a filesystem that was unsafe to use on a 32-bit system. A "short read" (see the very end of the message above) was also the predominant failure mode on the previous filesystem. Even so, whatever garbage I had on the RAID array should not have resulted in segmentation faults when reading files from the filesystem.
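
    For the record, this is how I would now check whether a filesystem was created 64-bit-clean; a sketch, with /dev/mapper/omv-public standing in for the actual device:

    $ # which e2fsprogs is installed (64-bit ext4 handling matured around 1.43)
    $ dpkg -s e2fsprogs | grep '^Version'
    $ # does the existing filesystem have the 64bit feature enabled?
    $ sudo dumpe2fs -h /dev/mapper/omv-public | grep -i features
    $ # creating a >16TB filesystem with the 64bit feature explicitly requested:
    $ sudo mkfs.ext4 -O 64bit -b 4096 -L public /dev/mapper/omv-public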

    The point about the 16TB limit is an interesting one; I remember being unable to create, via OMV, an array that would take up all the available physical space, and having to settle for the maximum OMV offered. I also remember that ext4 itself is not limited to 16TB; the limit is in the tools.

     

    Here is lvdisplay:

    $ sudo lvdisplay
    [sudo] password for alexcp: 
      --- Logical volume ---
      LV Path                /dev/omv/public
      LV Name                public
      VG Name                omv
      LV UUID                xGyIgi-U00p-MVVv-zlz8-0quc-ZJwh-tuWvRl
      LV Write Access        read/write
      LV Creation host, time helios4, 2018-02-10 03:48:42 +0000
      LV Status              available
      # open                 0
      LV Size                18.19 TiB
      Current LE             4768703
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     4096
      Block device           254:0
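
    The LV Size line checks out against the extent count, assuming LVM's default 4 MiB extent size:

    $ # Current LE x 4 MiB per extent, converted to TiB
    $ echo "scale=2; 4768703 * 4 / 1024 / 1024" | bc
    18.19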

     

    cp fails, as does SMB network access to the shared folders. A fresh Debian Stretch install behaves identically to the not-so-fresh one, and the previously installed OMV3 on Debian Jessie (I still have its SD card around) shows the same "Internal error: Oops: 5 [#1] SMP THUMB2".

     

    At this point, I tend to believe this is a hardware issue of sorts, maybe something as simple as a faulty power brick. Too bad it's not the SPI NOR flash, for which a fix is known. Oh well. Over the next few days, I will assemble another rig and try to salvage the data through it.

     

    @gprovost: thank you for helping out with this issue!

    No luck with rsync. With the faulty HDD physically disconnected, an attempt to rsync the array to either a local USB drive or a hard drive on another machine invariably ends in a segmentation fault and a system crash, as before.

     

    Would I be able to access the filesystem on the RAID if I connected the HDDs to an Intel-based machine running Debian and OMV? I have a little Windows desktop with four SATA ports. I should be able to set up Debian and OMV on a USB stick and use the SATA ports for the array.
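
    If that works the way I hope, the sequence on the Intel machine would be something like this (read-only mount for safety; device names will be whatever that box assigns):

    $ # assemble the md array from the superblocks on the member disks
    $ sudo mdadm --assemble --scan
    $ # activate the volume group that lives on the array
    $ sudo vgchange -ay omv
    $ # mount the filesystem read-only for salvage
    $ sudo mount -o ro /dev/omv/public /mnt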

    The array finished rebuilding overnight; see below. I will try rsync later today to see whether the data can be copied.

    $ sudo mdadm -D /dev/md127
    [sudo] password for alexcp: 
    /dev/md127:
            Version : 1.2
      Creation Time : Sun Feb  4 18:42:03 2018
         Raid Level : raid10
         Array Size : 19532611584 (18627.75 GiB 20001.39 GB)
      Used Dev Size : 9766305792 (9313.88 GiB 10000.70 GB)
       Raid Devices : 4
      Total Devices : 3
        Persistence : Superblock is persistent
    
      Intent Bitmap : Internal
    
        Update Time : Fri Nov  9 05:34:14 2018
              State : clean, degraded 
     Active Devices : 3
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 0
    
             Layout : near=2
         Chunk Size : 512K
    
               Name : helios4:OMV  (local to host helios4)
               UUID : 16d26e7c:3c2aeef9:ec7cdf93:ca0fbfa5
             Events : 32902
    
        Number   Major   Minor   RaidDevice State
           0       8       16        0      active sync set-A   /dev/sdb
           1       8       32        1      active sync set-B   /dev/sdc
           -       0        0        2      removed
           3       8       64        3      active sync set-B   /dev/sde
    $ cat /proc/mdstat
    Personalities : [raid10] [raid0] [raid1] [raid6] [raid5] [raid4] 
    md127 : active raid10 sdb[0] sde[3] sdc[1]
          19532611584 blocks super 1.2 512K chunks 2 near-copies [4/3] [UU_U]
          bitmap: 21/146 pages [84KB], 65536KB chunk
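
    The rsync attempt will be along these lines (both paths are placeholders for my shared folder and the backup drive):

    $ # archive mode, keep hard links and ACLs/xattrs, show overall progress
    $ sudo rsync -aHAX --info=progress2 /srv/dev-disk-by-label-public/ /media/usb-backup/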

     

    The re-add worked. Note that sda is now the USB drive, so what was sda before is now sdb, and so on.

    $ sudo mdadm --manage /dev/md127 --re-add /dev/sdb
    mdadm: re-added /dev/sdb
    $ sudo mdadm -D /dev/md127
    /dev/md127:
            Version : 1.2
      Creation Time : Sun Feb  4 18:42:03 2018
         Raid Level : raid10
         Array Size : 19532611584 (18627.75 GiB 20001.39 GB)
      Used Dev Size : 9766305792 (9313.88 GiB 10000.70 GB)
       Raid Devices : 4
      Total Devices : 3
        Persistence : Superblock is persistent
    
      Intent Bitmap : Internal
    
        Update Time : Fri Nov  9 04:59:43 2018
              State : clean, degraded, recovering 
     Active Devices : 2
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 1
    
             Layout : near=2
         Chunk Size : 512K
    
     Rebuild Status : 0% complete
    
               Name : helios4:OMV  (local to host helios4)
               UUID : 16d26e7c:3c2aeef9:ec7cdf93:ca0fbfa5
             Events : 32891
    
        Number   Major   Minor   RaidDevice State
           0       8       16        0      spare rebuilding   /dev/sdb
           1       8       32        1      active sync set-B   /dev/sdc
           -       0        0        2      removed
           3       8       64        3      active sync set-B   /dev/sde

     

    Lacking a spare SATA cable, I swapped the sda and sdc cables. dmesg still shows errors for sdc, so it must be a faulty HDD - a first for me, ever.
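
    To double-check the diagnosis, I can also ask the drive itself (smartmontools; sdc is wherever the suspect drive currently sits):

    $ # health summary, attributes, and the drive's internal error log
    $ sudo smartctl -a /dev/sdc
    $ # optionally start a long self-test and re-check with -a afterwards
    $ sudo smartctl -t long /dev/sdc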

     

    mdadm -D /dev/md127 gives the following:

    Quote

    /dev/md127:
            Version : 1.2
      Creation Time : Sun Feb  4 18:42:03 2018
         Raid Level : raid10
         Array Size : 19532611584 (18627.75 GiB 20001.39 GB)
      Used Dev Size : 9766305792 (9313.88 GiB 10000.70 GB)
       Raid Devices : 4
      Total Devices : 2
        Persistence : Superblock is persistent

      Intent Bitmap : Internal

        Update Time : Wed Nov  7 16:04:18 2018
              State : clean, degraded
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0

             Layout : near=2
         Chunk Size : 512K

               Name : helios4:OMV  (local to host helios4)
               UUID : 16d26e7c:3c2aeef9:ec7cdf93:ca0fbfa5
             Events : 32849

        Number   Major   Minor   RaidDevice State
           -       0        0        0      removed
           1       8       16        1      active sync set-B   /dev/sdb
           -       0        0        2      removed
           3       8       48        3      active sync set-B   /dev/sdd

     

     

    I created the encrypted partition when I was originally setting up the Helios - it was part of the setup instructions - but I never used it.

     

    I PMed you the logs.

     

    Also, I disconnected sdc and tried to rsync the RAID array to a locally connected USB drive as before. After copying a bunch of files, I got the following; this is the sort of message I was getting before:

    Quote

    Segmentation fault

    Message from syslogd@localhost at Nov  9 01:53:13 ...
    kernel:[  504.340711] Internal error: Oops: 5 [#1] SMP THUMB2
    kernel:[  504.438500] Process rsync (pid: 3050, stack limit = 0xed1da220)
    kernel:[  504.444431] Stack: (0xed1dbd50 to 0xed1dc000)
    kernel:[  504.448797] bd40:                                     ed797a28 ed1dbdc4 c0713078 0000164a
    kernel:[  504.456994] bd60: ed797a28 c0280791 e94dc4c0 ed797a28 ed1dbdc4 c0248ddf ed1dbdc4 ed1dbdc4
    kernel:[  504.465190] bd80: ed797a28 e8f40920 ed1dbdc4 00000000 ed797a28 c025c043 95b4ce29 c0a03f88
    kernel:[  504.473387] bda0: e8f40920 ed1dbdc4 ed186000 c025c1a1 00000000 01400040 c0a7e3c4 00000001
    kernel:[  504.481584] bdc0: 00000000 e94dc4c0 00000000 00010160 00000001 00001702 ed1dbf08 95b4ce29
    kernel:[  504.489780] bde0: eccae840 e8f40920 ed797a28 ed1dbe5c 00000000 ed1dbf08 e8f40a14 eccae840
    kernel:[  504.497976] be00: 00000000 c025f70d 00000000 c0a03f88 ed1dbe20 00000801 e8f40920 c020a469
    kernel:[  504.506172] be20: 00000001 e8f40920 00000001 ed1dbe5c 00000000 c01ff0d5 c01ff071 c0a03f88
    kernel:[  504.514369] be40: e8f40920 ece81e10 c01ff071 c0200643 5be4e888 2ea0b032 e8f40a14 5be4e888
    kernel:[  504.522564] be60: 2ea0b032 95b4ce29 00000000 00000029 00000000 ed1dbef0 00000000 c01a6ab5
    kernel:[  504.530761] be80: 00000001 c074e390 e8f40920 00000001 0000226c 00000029 ffffe000 00000029
    kernel:[  504.538957] bea0: eccae8a8 00000001 00000000 00000029 00080001 014000c0 00000004 eccae840
    kernel:[  504.547153] bec0: 00000000 00000000 00000000 c0a03f88 ed1dbf78 00000029 00000000 c01e9529
    kernel:[  504.555349] bee0: 00000029 00000001 01218988 00000029 00000000 00000000 00000000 ed1dbef0
    kernel:[  504.563545] bf00: 00000000 95b4ce29 eccae840 00000000 00000029 00000000 00000000 00000000
    kernel:[  504.571740] bf20: 00000000 00000000 00000000 95b4ce29 00000000 00000000 01218988 eccae840
    kernel:[  504.579936] bf40: ffffe000 ed1dbf78 00000029 c01eaf13 00000000 00000000 000003e8 c0a03f88
    kernel:[  504.588133] bf60: eccae840 00000000 00000000 eccae840 01218988 c01eb24f 00000000 00000000
    kernel:[  504.596329] bf80: 5a7fb644 95b4ce29 01a03b98 00000029 00000029 00000003 c01065c4 ed1da000
    kernel:[  504.604526] bfa0: 00000000 c01063c1 01a03b98 00000029 00000003 01218988 00000029 00000000
    kernel:[  504.612722] bfc0: 01a03b98 00000029 00000029 00000003 00000000 00000000 00000000 00000000
    kernel:[  504.620918] bfe0: 00000000 beecc234 004b82f9 b6f25a76 20000030 00000003 00000000 00000000
    kernel:[  504.743011] Code: 2b00 d1d1 de02 6aa2 (6853) 3301

    [  791.473852] EXT4-fs (dm-0): error count since last fsck: 2
    [  791.479365] EXT4-fs (dm-0): initial error at time 1541609141: mb_free_blocks:1469: block 2153233564
    [  791.488465] EXT4-fs (dm-0): last error at time 1541609141: ext4_mb_generate_buddy:757

     

  9. Thank you for the quick reply.

    • I confirm there is no mtdblock0 device listed by lsblk.
    • My Helios is fitted with 4x WD100EFAX HDDs, each rated for 5V/400mA and 12V/550mA. The Helios itself is powered by a 12V 8A brick (see the rough power budget after this list).
    • I tried copying files over SMB with no devices connected to the USB ports, with the same result: one or a few files can be copied without issues, but an attempt to copy a folder crashes the system.
    • armbianmonitor -u output is here: http://ix.io/1reV
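
    For what it's worth, a back-of-the-envelope power budget from those label ratings suggests the brick has steady-state headroom (spin-up draw on the 12V rail is considerably higher, though):

    $ # per drive: 5V x 0.4A + 12V x 0.55A = 8.6 W; four drives:
    $ echo "4 * (5*0.4 + 12*0.55)" | bc
    34.40
    $ # brick capacity: 12V x 8A = 96 W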
  10. On 12/27/2017 at 2:21 AM, gprovost said:

    Known Issues :

    • During SATA heavy load, accessing SPI NOR Flash will generate ATA errors. Temporary fix : Disable SPI NOR flash.

    Hello,

    Is there an easy way of disabling SPI to get rid of the ATA errors?

    I am running OMV4 on pre-compiled Armbian Stretch. When I try backing up the RAID array, either to a locally connected USB drive using rsync or over the network using SMB, I end up with ATA errors, segmentation faults, or system crashes after copying a few files.
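
    My first thought would be to blacklist the SPI NOR flash driver module, along these lines - but m25p80 is only my guess at the module name, and if the driver is built into this kernel rather than a module, this will not work:

    $ # keep the (assumed) SPI NOR flash driver module from loading at boot
    $ echo "blacklist m25p80" | sudo tee /etc/modprobe.d/disable-spi-nor.conf
    $ sudo update-initramfs -u
    $ sudo reboot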
