Missing RAID array after power loss


NotAnExpert


Last night we lost power to the whole house. I had to start the NAS manually, and when I tried to access my data there was nothing but empty shares. In the OpenMediaVault web UI the file system status shows as "missing", there is nothing under RAID Management, and all the disks show up properly under Storage > Disks.

 

The NAS is set up as RAID 6 on ext4 with five 12 TB drives.

 

For starters, I wonder why the internal battery did not gracefully shut down the Helios64 to avoid this, given that power outages are a rather common issue.

 

Then, how do I get my file system back? After browsing some forums, I gathered the info that seems relevant:

 

lsblk

NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda            8:0    0 10.9T  0 disk
sdb            8:16   0 10.9T  0 disk
sdc            8:32   0 10.9T  0 disk
sdd            8:48   0 10.9T  0 disk
sde            8:64   0 10.9T  0 disk
mmcblk1      179:0    0 14.6G  0 disk
└─mmcblk1p1  179:1    0 14.4G  0 part /
mmcblk1boot0 179:32   0    4M  1 disk
mmcblk1boot1 179:64   0    4M  1 disk

 

cat /etc/fstab

UUID=dabb3dbf-8631-4051-a032-c0b97eb285bd / ext4 defaults,noatime,nodiratime,commit=600,errors=remount-ro 0 1
tmpfs /tmp tmpfs defaults,nosuid 0 0
# >>> [openmediavault]
/dev/disk/by-label/data        /srv/dev-disk-by-label-data    ext4    defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
# <<< [openmediavault]

 

cat /etc/mdadm/mdadm.conf

# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts

# definitions of existing MD arrays
ARRAY /dev/md/helios64:0 metadata=1.2 name=helios64:0 UUID=f2188013:e3cdc6dd:c8a55f0d:d0e9c602

 

cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sde[4](S) sdc[2](S) sda[0](S) sdd[3](S) sdb[1](S)
      58593766400 blocks super 1.2

unused devices: <none>

 

fsck /dev/disk/by-label/

fsck from util-linux 2.33.1
e2fsck 1.44.5 (15-Dec-2018)
fsck.ext2: No such file or directory while trying to open /dev/disk/by-label/
Possibly non-existent device?

 

I don't want to lose data even if I have it backed up so I don't want to try to fix it blindly. And given that this could happen again to others and myself it would be very helpful to find a recovery method that others could use.


  • gprovost changed the title to Missing RAID array after power loss

From the look of it, your RAID array is in an inactive state.

 

 md0 : inactive sde[4](S) sdc[2](S) sda[0](S) sdd[3](S) sdb[1](S) 

 

Can you show the output of the following command: mdadm -D /dev/md0

 

Also send us the link created by the following command: armbianmonitor -u

 

3 hours ago, NotAnExpert said:

For starters, I wonder why the internal battery did not gracefully shut down the Helios64 to avoid this, given that power outages are a rather common issue.

 

The graceful shutdown after 10 min on battery was unfortunately not implemented yet :-/ It is in the upcoming update.


mdadm -D /dev/md0

/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 5
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 5

              Name : helios64:0  (local to host helios64)
              UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
            Events : 26341

    Number   Major   Minor   RaidDevice

       -       8       64        -        /dev/sde
       -       8       32        -        /dev/sdc
       -       8        0        -        /dev/sda
       -       8       48        -        /dev/sdd
       -       8       16        -        /dev/sdb
 

armbianmonitor -u

http://ix.io/2FKq

 

Got a UPS for it in the meantime.


Thanks!

 

I tried and got the following:

 

mdadm --run /dev/md0

mdadm: failed to start array /dev/md/helios64:0: Input/output error


mdadm -D /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Thu Nov  5 22:00:27 2020
        Raid Level : raid6
     Used Dev Size : 18446744073709551615
      Raid Devices : 5
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Nov 26 07:13:29 2020
             State : active, FAILED, Not Started 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : unknown

              Name : helios64:0  (local to host helios64)
              UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
            Events : 26352

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       -       0        0        2      removed
       -       0        0        3      removed
       -       0        0        4      removed

       -       8       32        2      sync   /dev/sdc
       -       8        0        0      sync   /dev/sda


cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdc[2] sda[0]
      23437506560 blocks super 1.2
       
unused devices: <none>
 


Can you do: mdadm --examine /dev/sd[abcde] and post the output here.

 

That will determine what we can do next: force re-adding devices to the array, or recreating it (last resort). https://raid.wiki.kernel.org/index.php/RAID_Recovery

 

Did you try anything on your own when you realized the array was missing? If yes, I need to know which commands you ran or what actions you took in OMV.

 

 


This is the output:

 

mdadm --examine /dev/sd[abcde]

/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
           Name : helios64:0  (local to host helios64)
  Creation Time : Thu Nov  5 22:00:27 2020
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
     Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : ae15b2cf:d55f6fcf:1e4ef3c5:1824b2c7

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Nov 26 07:13:29 2020
  Bad Block Log : 512 entries available at offset 56 sectors
       Checksum : 1e6baa82 - correct
         Events : 26352

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
           Name : helios64:0  (local to host helios64)
  Creation Time : Thu Nov  5 22:00:27 2020
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
     Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : e6c4d356:a1021369:2fcd3661:deaa4406

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Nov 25 23:33:03 2020
  Bad Block Log : 512 entries available at offset 56 sectors
       Checksum : 1909966b - correct
         Events : 26341

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
           Name : helios64:0  (local to host helios64)
  Creation Time : Thu Nov  5 22:00:27 2020
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
     Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 31f197dd:6f6d0545:fbc627a6:53582fd4

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Nov 26 07:13:29 2020
  Bad Block Log : 512 entries available at offset 56 sectors
       Checksum : 8e9940b8 - correct
         Events : 26352

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
           Name : helios64:0  (local to host helios64)
  Creation Time : Thu Nov  5 22:00:27 2020
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
     Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 3296b043:2025da1d:edc6e9eb:99e57e34

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Nov 25 23:33:03 2020
  Bad Block Log : 512 entries available at offset 56 sectors
       Checksum : 739abeb1 - correct
         Events : 26341

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
           Name : helios64:0  (local to host helios64)
  Creation Time : Thu Nov  5 22:00:27 2020
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
     Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : ecf702ec:8dec71e2:a7f465fa:86b78694

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Nov 25 23:33:03 2020
  Bad Block Log : 512 entries available at offset 56 sectors
       Checksum : 4f08e782 - correct
         Events : 26341

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)

 

=======

 

Regarding what I tried

 

When I noticed I couldn’t access the files and folders on my computer, I tried other computers. Then I went to the OMV front end and changed the workgroup, thinking it was just a Windows issue; when that didn’t work I removed the shares from Samba, but did not delete them from Access Rights Management > Shared Folders.

 

At that point I realized the filesystem was missing and mounting was disabled. I did nothing else, either in the OMV web interface or through SSH, except for the info and commands you have requested.

 

Thank you!


At first glance, things look pretty good, so it should be easy to reassemble the array.

 

The reason it didn't happen automatically is that the event count on sda and sdc differs from the one on sdb, sdd and sde. Fortunately the difference is very small, so the data on the RAID should still be OK.
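(Editor's note, not from the thread: the event counts can be compared side by side by parsing the `mdadm --examine` output. A minimal sketch; the helper name `summarize_events` is hypothetical, and the parsing assumes the standard v1.2 superblock report layout shown above.)

```shell
# summarize_events: print "<device> <event count>" pairs from `mdadm --examine` output.
# Hypothetical helper name; assumes the usual "/dev/sdX:" and "Events : N" lines.
summarize_events() {
  awk '/^\/dev\//  { dev = $1; sub(/:$/, "", dev) }
       /^ *Events/ { print dev, $NF }'
}
# On the NAS (as root):
#   mdadm --examine /dev/sd[abcde] | summarize_events
```

Members whose count lags behind the others are the ones an `--assemble --force` would bring forward.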

 

First, you need to stop the array:

 

mdadm --stop /dev/md0

 

Let's try to reassemble without the --force option:

 

mdadm /dev/md0 --assemble /dev/sd[abcde]

 

Let us know if it works. If it does, your array should start the syncing process (check cat /proc/mdstat), and you could share the output of mdadm -D /dev/md0.

 

If it fails, please show the output. The next step, if the previous one failed, is to try to assemble with the --force option. Please note that if --force is used you might end up with a few corrupted files (whatever write operations happened during the event-count gap; in your case the difference is very small).

 

mdadm /dev/md0 --assemble --force /dev/sd[abcde]

 

Same here: share the output of cat /proc/mdstat and mdadm -D /dev/md0.


Thanks!

 

Ran the commands; the non-forced assembly failed, but the forced one fixed the issue. I have been checking files and folders and have not found anything wrong yet.

 

Appreciate your help and detailed instructions!

 

mdadm --stop /dev/md0
mdadm: stopped /dev/md0


mdadm /dev/md0 --assemble /dev/sd[abcde]
mdadm: /dev/md0 assembled from 2 drives - not enough to start the array


mdadm /dev/md0 --assemble --force /dev/sd[abcde]
mdadm: forcing event count in /dev/sdb(1) from 26341 upto 26352
mdadm: forcing event count in /dev/sdd(3) from 26341 upto 26352
mdadm: forcing event count in /dev/sde(4) from 26341 upto 26352
mdadm: clearing FAULTY flag for device 3 in /dev/md0 for /dev/sdd
mdadm: clearing FAULTY flag for device 4 in /dev/md0 for /dev/sde
mdadm: Marking array /dev/md0 as 'clean'
mdadm: /dev/md0 has been started with 5 drives.


cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid6 sda[0] sde[4] sdd[3] sdc[2] sdb[1]
      35156259840 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/88 pages [0KB], 65536KB chunk

unused devices: <none>


mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Nov  5 22:00:27 2020
        Raid Level : raid6
        Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
     Used Dev Size : 11718753280 (11175.87 GiB 12000.00 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Nov 26 07:13:29 2020
             State : clean
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : helios64:0  (local to host helios64)
              UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
            Events : 26352

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       32        2      active sync   /dev/sdc
       3       8       48        3      active sync   /dev/sdd
       4       8       64        4      active sync   /dev/sde

 

=============================================================

 

After running the commands I went back to the OMV web UI, and the RAID was listed in a clean state.

 

Under File Systems I selected the RAID 6 “data” volume and mounted it.

 

Added shares to Samba and all data is accessible.

 


Hi

I had the same situation except there was no power loss.

In the morning I saw the red LED blinking and there was no access to anything.

I have a RAID 6 with 4 drives + one missing (I need to buy one and add it).

 mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 5
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 5

              Name : helios64:0  (local to host helios64)
              UUID : 7530fb03:c90e2f01:e0d78a6e:14097e55
            Events : 107318

    Number   Major   Minor   RaidDevice

       -       8        1        -        /dev/sda1
       -       8       65        -        /dev/sde1
       -       8       49        -        /dev/sdd1
       -       8       33        -        /dev/sdc1
       -       8       17        -        /dev/sdb1
 mdadm -E /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 7530fb03:c90e2f01:e0d78a6e:14097e55
           Name : helios64:0  (local to host helios64)
  Creation Time : Sun Dec 27 16:58:41 2020
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 11720583168 (5588.81 GiB 6000.94 GB)
     Array Size : 17580874752 (16766.43 GiB 18002.82 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : active
    Device UUID : 78ccd511:bbb3a836:60faeece:2fbccc9c

Internal Bitmap : 8 sectors from superblock
    Update Time : Tue Dec 29 10:17:01 2020
  Bad Block Log : 512 entries available at offset 32 sectors
       Checksum : f954d241 - correct
         Events : 107318

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AA.AA ('A' == active, '.' == missing, 'R' == replacing)

After stopping and reassembling the array with the force option I have access to files.

 cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sda1[0] sde1[4] sdd1[3] sdb1[1]
      17580874752 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/4] [UU_UU]
      [>....................]  resync =  0.2% (12862976/5860291584) finish=1082.6min speed=90012K/sec
      bitmap: 44/44 pages [176KB], 65536KB chunk

unused devices: <none>

1. I don't understand what md is resyncing: there were 4 active drives before and there are 4 active drives now. Nothing changed, so why the resync?

2. I'm really concerned about this situation: there was no power outage nor any other cause I can see; it just stopped working during the night.


Did you dig into the syslog history to see which event resulted in the array becoming inactive?

Since you are running RAID 6 with one missing drive, as soon as another drive drops, the RAID will move to an inactive state as a safety measure. So maybe look for ATA events.
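(Editor's note, not from the thread: one way to hunt for such ATA events in the kernel log is a simple filter like the sketch below. The helper name `scan_ata_events` is hypothetical, and the pattern list is an assumption based on common libata error strings, not exhaustive.)

```shell
# scan_ata_events: keep only kernel-log lines that look like libata/link trouble.
# Hypothetical helper; extend the pattern list for your controller as needed.
scan_ata_events() {
  grep -iE 'ata[0-9]+(\.[0-9]+)?:|link (is slow to respond|reset)|I/O error'
}
# On the box (as root), e.g. for the previous boot:
#   journalctl -k -b -1 | scan_ata_events
#   dmesg | scan_ata_events
```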


Good morning. Sorry for my English.

I have the exact same problem, but with a difference when I use the command:

mdadm /dev/md0 --assemble /dev/sd[abc]

 

root@Server-Archivos:~# mdadm /dev/md127 --assemble /dev/sd[abc]
mdadm: /dev/sda is busy - skipping
mdadm: /dev/sdb is busy - skipping
mdadm: /dev/sdc is busy - skipping
 

I am posting the same information in case you can help me.

 


Linux Server-Archivos 4.19.0-0.bpo.5-amd64 #1 SMP Debian 4.19.37-4~bpo9+1 (2019-06-19) x86_64



root@Server-Archivos:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
sdb      8:16   0 465.8G  0 disk
sdc      8:32   0 465.8G  0 disk
sdd      8:48   0 465.8G  0 disk
├─sdd1   8:49   0   512M  0 part /boot/efi
├─sdd2   8:50   0 459.3G  0 part /
└─sdd3   8:51   0     6G  0 part [SWAP]


root@Server-Archivos:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdb[1](S) sda[0](S) sdc[2](S)
      1464766536 blocks super 1.2

unused devices: <none>

  
root@Server-Archivos:~# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 3
    Persistence : Superblock is persistent

          State : inactive

           Name : Server-Archivos:Datos  (local to host Server-Archivos)
           UUID : c5da7d95:c8cd6594:261f77d3:6d594c7c
         Events : 5169

    Number   Major   Minor   RaidDevice

       -       8       32        -        /dev/sdc
       -       8        0        -        /dev/sda
       -       8       16        -        /dev/sdb

  
root@Server-Archivos:~# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 3
    Persistence : Superblock is persistent

          State : inactive

           Name : Server-Archivos:Datos  (local to host Server-Archivos)
           UUID : c5da7d95:c8cd6594:261f77d3:6d594c7c
         Events : 5169

    Number   Major   Minor   RaidDevice

       -       8       32        -        /dev/sdc
       -       8        0        -        /dev/sda
       -       8       16        -        /dev/sdb
  
  
root@Server-Archivos:~# armbianmonitor -u
-bash: armbianmonitor: command not found
  
  
root@Server-Archivos:~# mdadm --run /dev/md127
mdadm: failed to start array /dev/md/Server-Archivos:Datos: Input/output error
  
  
root@Server-Archivos:~# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Fri Jul 26 09:15:43 2019
     Raid Level : raid5
  Used Dev Size : 488255488 (465.64 GiB 499.97 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Nov  3 14:16:56 2016
          State : active, degraded, Not Started
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : Server-Archivos:Datos  (local to host Server-Archivos)
           UUID : c5da7d95:c8cd6594:261f77d3:6d594c7c
         Events : 5169

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       16        1      active sync   /dev/sdb
       2       8       32        2      active sync   /dev/sdc
  
  
root@Server-Archivos:~# cat /proc /mdstat
cat: /proc: Is a directory
cat: /mdstat: No such file or directory
  
  
root@Server-Archivos:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdb[1] sdc[2]
      976511024 blocks super 1.2

unused devices: <none>
  
  
root@Server-Archivos:~# mdadm --examine /dev/sd[abc]
/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : c5da7d95:c8cd6594:261f77d3:6d594c7c
           Name : Server-Archivos:Datos  (local to host Server-Archivos)
  Creation Time : Fri Jul 26 09:15:43 2019
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 976511024 (465.64 GiB 499.97 GB)
     Array Size : 976510976 (931.27 GiB 999.95 GB)
  Used Dev Size : 976510976 (465.64 GiB 499.97 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=48 sectors
          State : active
    Device UUID : 7a9f424a:b1acf2cf:5f503af6:8df121f2

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Nov  3 14:16:53 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 1a1f3ca9 - correct
         Events : 3339

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : c5da7d95:c8cd6594:261f77d3:6d594c7c
           Name : Server-Archivos:Datos  (local to host Server-Archivos)
  Creation Time : Fri Jul 26 09:15:43 2019
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 976511024 (465.64 GiB 499.97 GB)
     Array Size : 976510976 (931.27 GiB 999.95 GB)
  Used Dev Size : 976510976 (465.64 GiB 499.97 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=48 sectors
          State : active
    Device UUID : 9264e6f2:c65b4e79:7fa33f9e:911f3c73

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Nov  3 14:16:56 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 953f3922 - correct
         Events : 5169

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : .AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : c5da7d95:c8cd6594:261f77d3:6d594c7c
           Name : Server-Archivos:Datos  (local to host Server-Archivos)
  Creation Time : Fri Jul 26 09:15:43 2019
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 976511024 (465.64 GiB 499.97 GB)
     Array Size : 976510976 (931.27 GiB 999.95 GB)
  Used Dev Size : 976510976 (465.64 GiB 499.97 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=48 sectors
          State : active
    Device UUID : ed154b12:ee8c47ec:da3932ae:34a4aef9

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Nov  3 14:16:56 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : be0236a4 - correct
         Events : 5169

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : .AA ('A' == active, '.' == missing, 'R' == replacing)

  
root@Server-Archivos:~# mdadm --stop /dev/md127
mdadm: stopped /dev/md127


  
root@Server-Archivos:~# mdadm /dev/md127 --assemble /dev/sd[abc]
mdadm: /dev/sda is busy - skipping
mdadm: /dev/sdb is busy - skipping
mdadm: /dev/sdc is busy - skipping
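(Editor's note, not an answer from the thread: "busy - skipping" usually means some array still holds the member devices, often one that udev auto-assembled under another name right after the --stop. /proc/mdstat shows which array that is; a tiny checker, with the hypothetical helper name `holder_of`, could look like this.)

```shell
# holder_of: given /proc/mdstat text on stdin, print the md array that holds
# the named member. Hypothetical helper; assumes the usual mdstat line format
# "mdN : <state> sdX[0](S) sdY[1] ...".
holder_of() {
  awk -v d="$1" '$0 ~ /^md[0-9]+ :/ && index($0, d "[") { print $1 }'
}
# Usage on the box:
#   holder_of sda < /proc/mdstat
# If it prints an array (e.g. md127), stop that array before assembling again.
```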

 

 

This is the syslog.

 


Feb 10 08:52:33 Server-Archivos liblogging-stdlog:  [origin software="rsyslogd" swVersion="8.24.0" x-pid="654" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
Feb 10 08:52:49 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:53:19 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:53:49 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:54:19 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:54:49 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:55:19 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:55:49 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:56:14 Server-Archivos dhclient[846]: DHCPREQUEST of 10.200.0.174 on enp0s25 to 10.200.0.1 port 67
Feb 10 08:56:14 Server-Archivos dhclient[846]: DHCPACK of 10.200.0.174 from 10.200.0.1
Feb 10 08:56:14 Server-Archivos systemd[1]: Reloading Samba SMB Daemon.
Feb 10 08:56:14 Server-Archivos systemd[1]: Reloaded Samba SMB Daemon.
Feb 10 08:56:14 Server-Archivos dhclient[846]: bound to 10.200.0.174 -- renewal in 283 seconds.
Feb 10 08:56:19 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:56:50 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:57:20 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:57:50 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:58:20 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:58:50 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:59:20 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 08:59:50 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 09:00:01 Server-Archivos CRON[1442]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Feb 10 09:00:20 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 09:00:50 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 09:00:52 Server-Archivos systemd[1]: Starting Cleanup of Temporary Directories...
Feb 10 09:00:52 Server-Archivos systemd[1]: Started Cleanup of Temporary Directories.
Feb 10 09:00:57 Server-Archivos dhclient[846]: DHCPREQUEST of 10.200.0.174 on enp0s25 to 10.200.0.1 port 67
Feb 10 09:00:57 Server-Archivos dhclient[846]: DHCPACK of 10.200.0.174 from 10.200.0.1
Feb 10 09:00:57 Server-Archivos systemd[1]: Reloading Samba SMB Daemon.
Feb 10 09:00:57 Server-Archivos systemd[1]: Reloaded Samba SMB Daemon.
Feb 10 09:00:57 Server-Archivos dhclient[846]: bound to 10.200.0.174 -- renewal in 275 seconds.
Feb 10 09:01:20 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
[... repeated monit 'mountpoint_srv_dev-disk-by-label-Raid5' failures (every 30 s) and a dhclient lease renewal omitted ...]
Feb 10 09:07:32 Server-Archivos systemd[1]: Starting Daily apt download activities...
Feb 10 09:07:32 Server-Archivos systemd[1]: Started Daily apt download activities.
Feb 10 09:07:32 Server-Archivos systemd[1]: apt-daily.timer: Adding 3h 22min 25.792508s random time.
Feb 10 09:07:32 Server-Archivos systemd[1]: apt-daily.timer: Adding 4h 6min 26.131669s random time.
Feb 10 09:07:52 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 09:08:22 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 09:08:52 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 09:09:00 Server-Archivos systemd[1]: Starting Clean php session files...
Feb 10 09:09:00 Server-Archivos systemd[1]: Started Clean php session files.
Feb 10 09:09:01 Server-Archivos CRON[1759]: (root) CMD (  [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Feb 10 09:09:02 Server-Archivos systemd[1]: dev-disk-by\x2dlabel-Raid5.device: Job dev-disk-by\x2dlabel-Raid5.device/start timed out.
Feb 10 09:09:02 Server-Archivos systemd[1]: Timed out waiting for device dev-disk-by\x2dlabel-Raid5.device.
Feb 10 09:09:02 Server-Archivos systemd[1]: Dependency failed for /srv/dev-disk-by-label-Raid5.
Feb 10 09:09:02 Server-Archivos systemd[1]: srv-dev\x2ddisk\x2dby\x2dlabel\x2dRaid5.mount: Job srv-dev\x2ddisk\x2dby\x2dlabel\x2dRaid5.mount/start failed with result 'dependency'.
Feb 10 09:09:02 Server-Archivos systemd[1]: Dependency failed for File System Check on /dev/disk/by-label/Raid5.
Feb 10 09:09:02 Server-Archivos systemd[1]: systemd-fsck@dev-disk-by\x2dlabel-Raid5.service: Job systemd-fsck@dev-disk-by\x2dlabel-Raid5.service/start failed with result 'dependency'.
Feb 10 09:09:02 Server-Archivos systemd[1]: dev-disk-by\x2dlabel-Raid5.device: Job dev-disk-by\x2dlabel-Raid5.device/start failed with result 'timeout'.
[... repeated monit mountpoint failures and dhclient lease renewals omitted ...]
Feb 10 09:15:01 Server-Archivos CRON[1850]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
[... repeated monit mountpoint failures omitted ...]
Feb 10 09:17:01 Server-Archivos CRON[1960]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Feb 10 09:17:01 Server-Archivos postfix/postsuper[1963]: fatal: scan_dir_push: open directory hold: No such file or directory
[... repeated monit mountpoint failures and dhclient lease renewals omitted ...]
Feb 10 09:24:50 Server-Archivos kernel: [ 2340.462817] md: kicking non-fresh sda from array!
Feb 10 09:24:50 Server-Archivos kernel: [ 2340.482528] md/raid:md127: not clean -- starting background reconstruction
Feb 10 09:24:50 Server-Archivos kernel: [ 2340.482559] md/raid:md127: device sdb operational as raid disk 1
Feb 10 09:24:50 Server-Archivos kernel: [ 2340.482561] md/raid:md127: device sdc operational as raid disk 2
Feb 10 09:24:50 Server-Archivos kernel: [ 2340.483245] md/raid:md127: cannot start dirty degraded array.
Feb 10 09:24:50 Server-Archivos kernel: [ 2340.505492] md/raid:md127: failed to run raid set.
Feb 10 09:24:50 Server-Archivos kernel: [ 2340.505493] md: pers->run() failed ...
[... repeated monit mountpoint failures, dhclient lease renewals, and routine cron entries (omv-mkrrdgraph, php sessionclean) omitted ...]
Feb 10 09:40:32 Server-Archivos systemd[1]: dev-disk-by\x2dlabel-Raid5.device: Job dev-disk-by\x2dlabel-Raid5.device/start timed out.
Feb 10 09:40:32 Server-Archivos systemd[1]: Timed out waiting for device dev-disk-by\x2dlabel-Raid5.device.
Feb 10 09:40:32 Server-Archivos systemd[1]: Dependency failed for /srv/dev-disk-by-label-Raid5.
Feb 10 09:40:32 Server-Archivos systemd[1]: srv-dev\x2ddisk\x2dby\x2dlabel\x2dRaid5.mount: Job srv-dev\x2ddisk\x2dby\x2dlabel\x2dRaid5.mount/start failed with result 'dependency'.
Feb 10 09:40:32 Server-Archivos systemd[1]: Dependency failed for File System Check on /dev/disk/by-label/Raid5.
Feb 10 09:40:32 Server-Archivos systemd[1]: systemd-fsck@dev-disk-by\x2dlabel-Raid5.service: Job systemd-fsck@dev-disk-by\x2dlabel-Raid5.service/start failed with result 'dependency'.
Feb 10 09:40:32 Server-Archivos systemd[1]: dev-disk-by\x2dlabel-Raid5.device: Job dev-disk-by\x2dlabel-Raid5.device/start failed with result 'timeout'.
[... monit mountpoint failures and a dhclient lease renewal omitted ...]
Feb 10 09:41:20 Server-Archivos kernel: [ 3330.950567] md: md127 stopped.
[... repeated monit mountpoint failures, a dhclient lease renewal, and an omv-mkrrdgraph cron run omitted ...]
Feb 10 09:46:57 Server-Archivos kernel: [ 3667.423878] md: md0 stopped.
Feb 10 09:46:57 Server-Archivos kernel: [ 3667.425907] md: md0 stopped.
Feb 10 09:47:00 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 09:47:30 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 09:47:43 Server-Archivos rrdcached[1116]: flushing old values
Feb 10 09:47:43 Server-Archivos rrdcached[1116]: rotating journals
Feb 10 09:47:43 Server-Archivos rrdcached[1116]: started new journal /var/lib/rrdcached/journal/rrd.journal.1612961263.326240
[... repeated monit mountpoint failures and a dhclient lease renewal omitted ...]
Feb 10 09:52:53 Server-Archivos cron-apt: CRON-APT RUN [/etc/cron-apt/config]: Wed Feb 10 08:52:33 -03 2021
Feb 10 09:52:53 Server-Archivos cron-apt: CRON-APT SLEEP: 3563, Wed Feb 10 09:51:56 -03 2021
Feb 10 09:52:53 Server-Archivos cron-apt: CRON-APT ACTION: 0-update
Feb 10 09:52:53 Server-Archivos cron-apt: CRON-APT LINE: /usr/bin/apt-get -o Acquire::http::Dl-Limit=25 update -o quiet=2
Feb 10 09:52:54 Server-Archivos cron-apt: CRON-APT ACTION: 3-download
Feb 10 09:52:54 Server-Archivos cron-apt: CRON-APT LINE: /usr/bin/apt-get -o Acquire::http::Dl-Limit=25 autoclean -y
Feb 10 09:52:54 Server-Archivos cron-apt: Reading package lists...
Feb 10 09:52:54 Server-Archivos cron-apt: Building dependency tree...
Feb 10 09:52:54 Server-Archivos cron-apt: Reading state information...
Feb 10 09:52:54 Server-Archivos cron-apt: CRON-APT LINE: /usr/bin/apt-get -o Acquire::http::Dl-Limit=25 dist-upgrade -d -y -o APT::Get::Show-Upgraded=true
Feb 10 09:52:54 Server-Archivos cron-apt: Reading package lists...
Feb 10 09:52:54 Server-Archivos cron-apt: Building dependency tree...
Feb 10 09:52:54 Server-Archivos cron-apt: Reading state information...
Feb 10 09:52:54 Server-Archivos cron-apt: Calculating upgrade...
Feb 10 09:52:54 Server-Archivos cron-apt: The following NEW packages will be installed:
Feb 10 09:52:54 Server-Archivos cron-apt:   linux-image-4.19.0-0.bpo.9-amd64
Feb 10 09:52:54 Server-Archivos cron-apt: The following packages will be upgraded:
Feb 10 09:52:54 Server-Archivos cron-apt:   apt apt-transport-https apt-utils base-files bind9-host ca-certificates dbus
Feb 10 09:52:54 Server-Archivos cron-apt:   firmware-amd-graphics firmware-atheros firmware-bnx2 firmware-bnx2x
Feb 10 09:52:54 Server-Archivos cron-apt:   firmware-brcm80211 firmware-cavium firmware-intel-sound firmware-intelwimax
Feb 10 09:52:54 Server-Archivos cron-apt:   firmware-ipw2x00 firmware-ivtv firmware-iwlwifi firmware-libertas
Feb 10 09:52:54 Server-Archivos cron-apt:   firmware-linux firmware-linux-nonfree firmware-misc-nonfree firmware-myricom
Feb 10 09:52:54 Server-Archivos cron-apt:   firmware-netxen firmware-qlogic firmware-realtek firmware-samsung
Feb 10 09:52:54 Server-Archivos cron-apt:   firmware-siano firmware-ti-connectivity gdisk gettext-base grub-common
Feb 10 09:52:54 Server-Archivos cron-apt:   grub-efi-amd64 grub-efi-amd64-bin grub2-common intel-microcode
Feb 10 09:52:54 Server-Archivos cron-apt:   libapt-inst2.0 libapt-pkg5.0 libbind9-140 libcairo2 libcups2 libcurl3-gnutls
Feb 10 09:52:54 Server-Archivos cron-apt:   libdbi1 libdbus-1-3 libdns-export162 libdns162 libexpat1 libfreetype6 libgd3
Feb 10 09:52:54 Server-Archivos cron-apt:   libglib2.0-0 libgnutls30 libgssapi-krb5-2 libicu57 libidn11 libisc-export160
Feb 10 09:52:54 Server-Archivos cron-apt:   libisc160 libisccc140 libisccfg140 libjpeg62-turbo libk5crypto3 libkrb5-3
Feb 10 09:52:54 Server-Archivos cron-apt:   libkrb5support0 libldap-2.4-2 libldap-common liblwres141 libmagic-mgc
Feb 10 09:52:54 Server-Archivos cron-apt:   libmagic1 libnghttp2-14 libnginx-mod-http-auth-pam libnginx-mod-http-dav-ext
Feb 10 09:52:54 Server-Archivos cron-apt:   libnginx-mod-http-echo libnginx-mod-http-geoip
Feb 10 09:52:54 Server-Archivos cron-apt:   libnginx-mod-http-image-filter libnginx-mod-http-subs-filter
Feb 10 09:52:54 Server-Archivos cron-apt:   libnginx-mod-http-upstream-fair libnginx-mod-http-xslt-filter
Feb 10 09:52:54 Server-Archivos cron-apt:   libnginx-mod-mail libnginx-mod-stream libonig4 libp11-kit0 libperl5.24
Feb 10 09:52:54 Server-Archivos cron-apt:   libpython2.7 libpython2.7-minimal libpython2.7-stdlib libpython3.5-minimal
Feb 10 09:52:54 Server-Archivos cron-apt:   libpython3.5-stdlib libsasl2-2 libsasl2-modules libsasl2-modules-db
Feb 10 09:52:54 Server-Archivos cron-apt:   libsqlite3-0 libss2 libssl1.0.2 libssl1.1 libsystemd0 libtiff5 libudev1
Feb 10 09:52:54 Server-Archivos cron-apt:   libwbclient0 libx11-6 libx11-data libxml2 libxslt1.1 linux-image-amd64 monit
Feb 10 09:52:54 Server-Archivos cron-apt:   nfs-common nfs-kernel-server nginx nginx-common nginx-full openmediavault
Feb 10 09:52:54 Server-Archivos cron-apt:   openssh-client openssh-server openssh-sftp-server openssl perl perl-base
Feb 10 09:52:54 Server-Archivos cron-apt:   perl-modules-5.24 php7.0-bcmath php7.0-cgi php7.0-cli php7.0-common
Feb 10 09:52:54 Server-Archivos cron-apt:   php7.0-fpm php7.0-json php7.0-mbstring php7.0-opcache php7.0-readline
Feb 10 09:52:54 Server-Archivos cron-apt:   php7.0-xml postfix postfix-sqlite proftpd-basic python-apt-common
Feb 10 09:52:54 Server-Archivos cron-apt:   python-samba python2.7 python2.7-minimal python3-apt python3-lxml python3.5
Feb 10 09:52:54 Server-Archivos cron-apt:   python3.5-minimal samba samba-common samba-common-bin samba-libs
Feb 10 09:52:54 Server-Archivos cron-apt:   samba-vfs-modules sudo systemd systemd-sysv tzdata udev usbutils
Feb 10 09:52:54 Server-Archivos cron-apt:   wpasupplicant wsdd
Feb 10 09:52:54 Server-Archivos cron-apt: 150 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Feb 10 09:52:54 Server-Archivos cron-apt: Need to get 0 B/173 MB of archives.
Feb 10 09:52:54 Server-Archivos cron-apt: After this operation, 297 MB of additional disk space will be used.
Feb 10 09:52:54 Server-Archivos cron-apt: Download complete and in download only mode
Feb 10 09:52:55 Server-Archivos postfix/postsuper[3388]: fatal: scan_dir_push: open directory hold: No such file or directory
Feb 10 09:52:56 Server-Archivos anacron[652]: Job `cron.daily' terminated (exit status: 1) (mailing output)
Feb 10 09:52:56 Server-Archivos anacron[652]: Normal exit (1 job run)
Feb 10 09:52:56 Server-Archivos systemd[1]: anacron.timer: Adding 4min 38.362909s random time.
Feb 10 09:52:56 Server-Archivos systemd[1]: Started Run anacron jobs.
Feb 10 09:52:56 Server-Archivos anacron[3416]: Anacron 2.3 started on 2021-02-10
Feb 10 09:52:56 Server-Archivos anacron[3416]: Normal exit (0 jobs run)
Feb 10 09:52:56 Server-Archivos systemd[1]: anacron.timer: Adding 1min 47.384132s random time.
Feb 10 09:53:01 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
[... dhclient lease renewals and monit mountpoint failures omitted; the identical dev-disk-by\x2dlabel-Raid5.device timeout block repeats once more at 09:54:26 ...]
Feb 10 10:00:01 Server-Archivos CRON[3549]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Feb 10 10:00:02 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:00:33 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:01:03 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:01:04 Server-Archivos dhclient[846]: DHCPREQUEST of 10.200.0.174 on enp0s25 to 10.200.0.1 port 67
Feb 10 10:01:04 Server-Archivos dhclient[846]: DHCPACK of 10.200.0.174 from 10.200.0.1
Feb 10 10:01:04 Server-Archivos systemd[1]: Reloading Samba SMB Daemon.
Feb 10 10:01:04 Server-Archivos systemd[1]: Reloaded Samba SMB Daemon.
Feb 10 10:01:04 Server-Archivos dhclient[846]: bound to 10.200.0.174 -- renewal in 245 seconds.
Feb 10 10:01:33 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:01:52 Server-Archivos systemd[1]: Started Run anacron jobs.
Feb 10 10:01:52 Server-Archivos anacron[3676]: Anacron 2.3 started on 2021-02-10
Feb 10 10:01:52 Server-Archivos anacron[3676]: Normal exit (0 jobs run)
Feb 10 10:01:52 Server-Archivos systemd[1]: anacron.timer: Adding 4min 49.931943s random time.
Feb 10 10:02:03 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:02:33 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:03:03 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:03:22 Server-Archivos systemd[1]: dev-disk-by\x2dlabel-Raid5.device: Job dev-disk-by\x2dlabel-Raid5.device/start timed out.
Feb 10 10:03:22 Server-Archivos systemd[1]: Timed out waiting for device dev-disk-by\x2dlabel-Raid5.device.
Feb 10 10:03:22 Server-Archivos systemd[1]: Dependency failed for /srv/dev-disk-by-label-Raid5.
Feb 10 10:03:22 Server-Archivos systemd[1]: srv-dev\x2ddisk\x2dby\x2dlabel\x2dRaid5.mount: Job srv-dev\x2ddisk\x2dby\x2dlabel\x2dRaid5.mount/start failed with result 'dependency'.
Feb 10 10:03:22 Server-Archivos systemd[1]: Dependency failed for File System Check on /dev/disk/by-label/Raid5.
Feb 10 10:03:22 Server-Archivos systemd[1]: systemd-fsck@dev-disk-by\x2dlabel-Raid5.service: Job systemd-fsck@dev-disk-by\x2dlabel-Raid5.service/start failed with result 'dependency'.
Feb 10 10:03:22 Server-Archivos systemd[1]: dev-disk-by\x2dlabel-Raid5.device: Job dev-disk-by\x2dlabel-Raid5.device/start failed with result 'timeout'.
Feb 10 10:03:33 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:04:03 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:04:33 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:05:03 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:05:10 Server-Archivos dhclient[846]: DHCPREQUEST of 10.200.0.174 on enp0s25 to 10.200.0.1 port 67
Feb 10 10:05:10 Server-Archivos dhclient[846]: DHCPACK of 10.200.0.174 from 10.200.0.1
Feb 10 10:05:10 Server-Archivos systemd[1]: Reloading Samba SMB Daemon.
Feb 10 10:05:10 Server-Archivos systemd[1]: Reloaded Samba SMB Daemon.
Feb 10 10:05:10 Server-Archivos dhclient[846]: bound to 10.200.0.174 -- renewal in 276 seconds.
Feb 10 10:05:34 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:06:04 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:06:34 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:07:04 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:07:34 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:08:04 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:08:34 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:09:01 Server-Archivos CRON[3760]: (root) CMD (  [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Feb 10 10:09:02 Server-Archivos systemd[1]: Starting Clean php session files...
Feb 10 10:09:02 Server-Archivos systemd[1]: Started Clean php session files.
Feb 10 10:09:04 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:09:34 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:09:46 Server-Archivos dhclient[846]: DHCPREQUEST of 10.200.0.174 on enp0s25 to 10.200.0.1 port 67
Feb 10 10:09:46 Server-Archivos dhclient[846]: DHCPACK of 10.200.0.174 from 10.200.0.1
Feb 10 10:09:46 Server-Archivos systemd[1]: Reloading Samba SMB Daemon.
Feb 10 10:09:46 Server-Archivos systemd[1]: Reloaded Samba SMB Daemon.
Feb 10 10:09:46 Server-Archivos dhclient[846]: bound to 10.200.0.174 -- renewal in 272 seconds.
Feb 10 10:10:04 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:10:32 Server-Archivos systemd[1]: dev-disk-by\x2dlabel-Raid5.device: Job dev-disk-by\x2dlabel-Raid5.device/start timed out.
Feb 10 10:10:32 Server-Archivos systemd[1]: Timed out waiting for device dev-disk-by\x2dlabel-Raid5.device.
Feb 10 10:10:32 Server-Archivos systemd[1]: Dependency failed for /srv/dev-disk-by-label-Raid5.
Feb 10 10:10:32 Server-Archivos systemd[1]: srv-dev\x2ddisk\x2dby\x2dlabel\x2dRaid5.mount: Job srv-dev\x2ddisk\x2dby\x2dlabel\x2dRaid5.mount/start failed with result 'dependency'.
Feb 10 10:10:32 Server-Archivos systemd[1]: Dependency failed for File System Check on /dev/disk/by-label/Raid5.
Feb 10 10:10:32 Server-Archivos systemd[1]: systemd-fsck@dev-disk-by\x2dlabel-Raid5.service: Job systemd-fsck@dev-disk-by\x2dlabel-Raid5.service/start failed with result 'dependency'.
Feb 10 10:10:32 Server-Archivos systemd[1]: dev-disk-by\x2dlabel-Raid5.device: Job dev-disk-by\x2dlabel-Raid5.device/start failed with result 'timeout'.
Feb 10 10:10:35 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:11:05 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:11:35 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:12:05 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:12:35 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:13:05 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:13:35 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:14:05 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:14:19 Server-Archivos dhclient[846]: DHCPREQUEST of 10.200.0.174 on enp0s25 to 10.200.0.1 port 67
Feb 10 10:14:19 Server-Archivos dhclient[846]: DHCPACK of 10.200.0.174 from 10.200.0.1
Feb 10 10:14:19 Server-Archivos systemd[1]: Reloading Samba SMB Daemon.
Feb 10 10:14:19 Server-Archivos systemd[1]: Reloaded Samba SMB Daemon.
Feb 10 10:14:19 Server-Archivos dhclient[846]: bound to 10.200.0.174 -- renewal in 245 seconds.
Feb 10 10:14:35 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:15:01 Server-Archivos CRON[4075]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Feb 10 10:15:05 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:15:36 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:16:06 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:16:36 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:17:01 Server-Archivos CRON[4184]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Feb 10 10:17:01 Server-Archivos postfix/postsuper[4187]: fatal: scan_dir_push: open directory hold: No such file or directory
Feb 10 10:17:06 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:17:36 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:18:06 Server-Archivos monit[1115]: 'mountpoint_srv_dev-disk-by-label-Raid5' status failed (1) -- /srv/dev-disk-by-label-Raid5 is not a mountpoint
Feb 10 10:18:24 Server-Archivos dhclient[846]: DHCPREQUEST of 10.200.0.174 on enp0s25 to 10.200.0.1 port 67
Feb 10 10:18:24 Server-Archivos dhclient[846]: DHCPACK of 10.200.0.174 from 10.200.0.1
Feb 10 10:18:24 Server-Archivos systemd[1]: Reloading Samba SMB Daemon.
Feb 10 10:18:24 Server-Archivos systemd[1]: Reloaded Samba SMB Daemon.
Feb 10 10:18:24 Server-Archivos dhclient[846]: bound to 10.200.0.174 -- renewal in 284 seconds.

 

 


@brunof You should first stop the array and then reassemble it.

 

mdadm --stop /dev/md0

 

Then try to reassemble without the --force option:

 

mdadm /dev/md0 --assemble /dev/sd[abcde]

 

If it fails, please show the output. The next step, if the previous one failed, is to try assembling with the --force option. Please note that if --force is used you might end up with a few corrupted files (whatever write operations happened during the event-count gap; in your case only /dev/sda is out of sync, so you shouldn't lose anything).
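Before reaching for --force, it can help to compare the per-member event counters that mdadm uses to decide which disks are stale. On the NAS you would run, as root, `mdadm --examine /dev/sd[abcde]` and look at the "Events" line of each member. A minimal sketch of comparing the counters (the sample values below are invented for illustration, not taken from this machine):

```shell
# Real command on the box: mdadm --examine /dev/sd[abcde] | grep -E '/dev/sd|Events'
# Here we work on a captured sample of that output instead (values invented):
sample='/dev/sda Events : 41198
/dev/sdb Events : 41207
/dev/sdc Events : 41207
/dev/sdd Events : 41207
/dev/sde Events : 41207'

# The highest counter is the freshest state; members below it were kicked out first.
max=$(printf '%s\n' "$sample" | awk '{print $4}' | sort -n | tail -n 1)
printf '%s\n' "$sample" | awk -v m="$max" '$4 < m {print $1 " is behind by " m-$4 " events"}'
```

A small gap on a single member (as gprovost describes for /dev/sda here) is the situation where a forced assemble usually loses little or nothing.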

 

mdadm /dev/md0 --assemble --force /dev/sd[abcde]

 

Finally, can you share the link output by the command armbianmonitor -u?


Hello gprovost and brunof,

I hit the same problem today, but at the end I get:


assembled from 2 drives - not enough to start the array

(It is a Helios64 with 5 drives.) Here are the transcripts:
Thank you very much for this post and the forum!
Best regards, David

 

lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda            8:0    0 12.8T  0 disk
sdb            8:16   0 12.8T  0 disk
sdc            8:32   0 12.8T  0 disk
sdd            8:48   0 12.8T  0 disk
sde            8:64   0 12.8T  0 disk
mmcblk2      179:0    0 14.6G  0 disk
└─mmcblk2p1  179:1    0 14.4G  0 part /
mmcblk2boot0 179:32   0    4M  1 disk
mmcblk2boot1 179:64   0    4M  1 disk


mdadm -D /dev/md0
mdadm: cannot open /dev/md0: No such file or directory

cat /etc/fstab
UUID=a7a41236-bd7e-4b26-a31d-e30f47633de7 / ext4 noatime,nodiratime,defaults,commit=600,errors=remount-ro 0 1
tmpfs /tmp tmpfs defaults,nosuid 0 0
# >>> [openmediavault]
/dev/disk/by-uuid/9a6523a1-ad50-4f47-ad34-40969412fb14 /srv/dev-disk-by-uuid-9a6523a1-ad50-4f47-ad34-40969412fb14 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-uuid/7ab05eb1-2cfd-4e97-a58c-11dfa0c345b1 /srv/dev-disk-by-uuid-7ab05eb1-2cfd-4e97-a58c-11dfa0c345b1 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-uuid/da21c77c-d9b0-438a-ae92-b3e5b69d43ea /srv/dev-disk-by-uuid-da21c77c-d9b0-438a-ae92-b3e5b69d43ea ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-uuid/8a02774d-b5ae-48d8-acdf-21650a592e95 /srv/dev-disk-by-uuid-8a02774d-b5ae-48d8-acdf-21650a592e95 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
//192.168.178.32/share1 /srv/009c112d-24cc-4cd2-80a8-1a92225c0da7 cifs _netdev,iocharset=utf8,vers=3.0,nofail,credentials=/root/.cifscredentials-c3c94089-2022-4974-afba-c886c545ac64 0 0
# <<< [openmediavault]

cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR david@mayenfisch.com
MAILFROM root

# definitions of existing MD arrays
ARRAY /dev/md/helios64:Meduraid metadata=1.2 name=helios64:Meduraid UUID=e8fadf3f:06489a06:20c58fb5:f3e10151

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sde[4](S) sdc[2](S) sdd[3](S) sdb[1](S) sda[0](S)
      68361251840 blocks super 1.2

unused devices: <none>


mdadm --stop /dev/md127
mdadm: stopped /dev/md127

root@helios64:~# mdadm /dev/md127 --assemble /dev/sd[abcde]    # disks spinning up at 12:29, supposed to resync
mdadm: /dev/md127 assembled from 2 drives - not enough to start the array.
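For context on why 2 of 5 is not enough: an n-disk md array can start at worst degraded, so it needs at least n-1 fresh members for RAID5 or n-2 for RAID6; mdadm considered three of these five disks too stale to include. (The usual next diagnostic, as above, is `mdadm --examine /dev/sd[abcde]` to compare event counts before attempting a forced assemble.) The minimum-member arithmetic, sketched for this 5-disk array:

```shell
# Minimum fresh members for an n-disk md array to start, possibly degraded:
n=5                       # disks in this array
raid5_min=$(( n - 1 ))    # RAID5 tolerates 1 missing disk
raid6_min=$(( n - 2 ))    # RAID6 tolerates 2 missing disks
echo "RAID5 needs >= $raid5_min of $n; RAID6 needs >= $raid6_min of $n; only 2 assembled"
```

Either way, 2 fresh members out of 5 is below the threshold, which is exactly what the mdadm message reports.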

