
NotAnExpert

Reputation Activity

  1. Like
    NotAnExpert got a reaction from aprayoga in Missing RAID array after power loss   
    Thanks!
     
    Ran the commands; the non-forced assembly failed, but the forced one fixed the issue. I have been checking files and folders and have not found anything wrong yet.
     
    Appreciate your help and detailed instructions!
     
    mdadm --stop /dev/md0
    mdadm: stopped /dev/md0
     
     
    mdadm /dev/md0 --assemble /dev/sd[abcde]
    mdadm: /dev/md0 assembled from 2 drives - not enough to start the array
     
     
    mdadm /dev/md0 --assemble --force /dev/sd[abcde]
    mdadm: forcing event count in /dev/sdb(1) from 26341 upto 26352
    mdadm: forcing event count in /dev/sdd(3) from 26341 upto 26352
    mdadm: forcing event count in /dev/sde(4) from 26341 upto 26352
    mdadm: clearing FAULTY flag for device 3 in /dev/md0 for /dev/sdd
    mdadm: clearing FAULTY flag for device 4 in /dev/md0 for /dev/sde
    mdadm: Marking array /dev/md0 as 'clean'
    mdadm: /dev/md0 has been started with 5 drives.
     
    cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active (auto-read-only) raid6 sda[0] sde[4] sdd[3] sdc[2] sdb[1]
          35156259840 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
          bitmap: 0/88 pages [0KB], 65536KB chunk
     
    unused devices: <none>
     
    mdadm -D /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Thu Nov  5 22:00:27 2020
            Raid Level : raid6
            Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
         Used Dev Size : 11718753280 (11175.87 GiB 12000.00 GB)
          Raid Devices : 5
         Total Devices : 5
           Persistence : Superblock is persistent
     
         Intent Bitmap : Internal
     
           Update Time : Thu Nov 26 07:13:29 2020
                 State : clean
        Active Devices : 5
       Working Devices : 5
        Failed Devices : 0
         Spare Devices : 0
     
                Layout : left-symmetric
            Chunk Size : 512K
     
    Consistency Policy : bitmap
     
                  Name : helios64:0  (local to host helios64)
                  UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
                Events : 26352
     
        Number   Major   Minor   RaidDevice State
           0       8        0        0      active sync   /dev/sda
           1       8       16        1      active sync   /dev/sdb
           2       8       32        2      active sync   /dev/sdc
           3       8       48        3      active sync   /dev/sdd
           4       8       64        4      active sync   /dev/sde
     
    =============================================================
     
    After running the commands I went back to the OMV web UI and the RAID was listed with a clean state.
 
    Under File Systems I selected the RAID6 “data” filesystem and mounted it.
     
    Added shares to Samba and all data is accessible.
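 
    A possible follow-up, not part of the original post, just standard mdadm housekeeping assuming the default sysfs paths: the array came up in auto-read-only mode, which clears on the first write, and after a forced assembly a parity scrub is a cheap way to confirm the drives agree.
 
    # Switch the array from auto-read-only to read-write (also happens automatically on the first write)
    mdadm --readwrite /dev/md0
 
    # Optional: run a parity check (scrub) and monitor it
    echo check > /sys/block/md0/md/sync_action
    cat /proc/mdstat
    cat /sys/block/md0/md/mismatch_cnt    # mismatches found once the check completes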
     
  2. Like
    NotAnExpert got a reaction from gprovost in Missing RAID array after power loss   
  3. Like
    NotAnExpert reacted to gprovost in Missing RAID array after power loss   
    At first glance, things look pretty good, so it should be easy to reassemble the array.
     
    The reason it didn't happen automatically is that the event count on sda and sdc is different from the one on sdb, sdd and sde. Fortunately the difference is very small, so the data on the RAID should still be OK.
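 
    (For reference, a one-liner that was not used in this exchange but pulls the per-drive event counts and update times straight out of the --examine output for comparison:)
 
    mdadm --examine /dev/sd[abcde] | grep -E '^/dev/sd|Update Time|Events'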
     
    First you need to stop the array
     
    mdadm --stop /dev/md0
     
    Let's try to reassemble without the --force option
     
    mdadm /dev/md0 --assemble /dev/sd[abcde]
     
    Let us know if it works. If yes, then your array should start the syncing process (cat /proc/mdstat); you could also share the output of mdadm -D /dev/md0
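 
    (If a resync does start, its progress can be followed continuously with watch, assuming it is installed on the box:)
 
    watch -n 5 cat /proc/mdstat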
     
    If it fails, please show the output. The next step, if the previous one failed, is to try assembling with the --force option. Please note that if --force is used you might end up with a few corrupted files (whatever write operations happened during the gap between the event counts; in your case that gap is very small).
     
    mdadm /dev/md0 --assemble --force /dev/sd[abcde]
     
    Same here, share the output of cat /proc/mdstat and mdadm -D /dev/md0
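 
    (One extra verification step worth considering, not from the original instructions: a forced assembly can leave a few files inconsistent, so a read-only filesystem check before remounting is cheap insurance. The sketch assumes the filesystem on /dev/md0 is ext4; use the matching fsck variant if OMV created something else.)
 
    # Read-only check: -n answers "no" to every prompt and makes no changes
    fsck.ext4 -n /dev/md0
 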
  4. Like
    NotAnExpert got a reaction from gprovost in Missing RAID array after power loss   
    This is the output:
     
    mdadm --examine /dev/sd[abcde]
     
    /dev/sda:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
               Name : helios64:0  (local to host helios64)
      Creation Time : Thu Nov  5 22:00:27 2020
         Raid Level : raid6
       Raid Devices : 5
     
    Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
         Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=0 sectors
              State : clean
        Device UUID : ae15b2cf:d55f6fcf:1e4ef3c5:1824b2c7
     
    Internal Bitmap : 8 sectors from superblock
        Update Time : Thu Nov 26 07:13:29 2020
      Bad Block Log : 512 entries available at offset 56 sectors
           Checksum : 1e6baa82 - correct
             Events : 26352
     
             Layout : left-symmetric
         Chunk Size : 512K
     
       Device Role : Active device 0
       Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sdb:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
               Name : helios64:0  (local to host helios64)
      Creation Time : Thu Nov  5 22:00:27 2020
         Raid Level : raid6
       Raid Devices : 5
     
    Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
         Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=0 sectors
              State : clean
        Device UUID : e6c4d356:a1021369:2fcd3661:deaa4406
     
    Internal Bitmap : 8 sectors from superblock
        Update Time : Wed Nov 25 23:33:03 2020
      Bad Block Log : 512 entries available at offset 56 sectors
           Checksum : 1909966b - correct
             Events : 26341
     
             Layout : left-symmetric
         Chunk Size : 512K
     
       Device Role : Active device 1
       Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sdc:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
               Name : helios64:0  (local to host helios64)
      Creation Time : Thu Nov  5 22:00:27 2020
         Raid Level : raid6
       Raid Devices : 5
     
    Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
         Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=0 sectors
              State : clean
        Device UUID : 31f197dd:6f6d0545:fbc627a6:53582fd4
     
    Internal Bitmap : 8 sectors from superblock
        Update Time : Thu Nov 26 07:13:29 2020
      Bad Block Log : 512 entries available at offset 56 sectors
           Checksum : 8e9940b8 - correct
             Events : 26352
     
             Layout : left-symmetric
         Chunk Size : 512K
     
       Device Role : Active device 2
       Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sdd:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
               Name : helios64:0  (local to host helios64)
      Creation Time : Thu Nov  5 22:00:27 2020
         Raid Level : raid6
       Raid Devices : 5
     
    Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
         Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=0 sectors
              State : clean
        Device UUID : 3296b043:2025da1d:edc6e9eb:99e57e34
     
    Internal Bitmap : 8 sectors from superblock
        Update Time : Wed Nov 25 23:33:03 2020
      Bad Block Log : 512 entries available at offset 56 sectors
           Checksum : 739abeb1 - correct
             Events : 26341
     
             Layout : left-symmetric
         Chunk Size : 512K
     
       Device Role : Active device 3
       Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
    /dev/sde:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x1
         Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
               Name : helios64:0  (local to host helios64)
      Creation Time : Thu Nov  5 22:00:27 2020
         Raid Level : raid6
       Raid Devices : 5
     
    Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
         Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
        Data Offset : 264192 sectors
       Super Offset : 8 sectors
       Unused Space : before=264112 sectors, after=0 sectors
              State : clean
        Device UUID : ecf702ec:8dec71e2:a7f465fa:86b78694
     
    Internal Bitmap : 8 sectors from superblock
        Update Time : Wed Nov 25 23:33:03 2020
      Bad Block Log : 512 entries available at offset 56 sectors
           Checksum : 4f08e782 - correct
             Events : 26341
     
             Layout : left-symmetric
         Chunk Size : 512K
     
       Device Role : Active device 4
       Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
     
    =======
     
    Regarding what I tried
     
    When I noticed I couldn't access the files and folders from my computer, I tried other computers. Then I went to the OMV front end and changed the workgroup, thinking it was just a Windows issue, and when that didn't work I removed the shares from Samba (I did not delete them from Access Rights Management > Shared Folders).
     
    At that point I realized the filesystem was missing and mounting was disabled. I did nothing else, either in the OMV web interface or through SSH, except for the info and commands you have requested.
     
    Thank you!