NotAnExpert
  1. Thanks! I ran the commands; the non-forced assembly failed, but the forced one fixed the issue. I have been checking files and folders and have not found anything wrong yet. I appreciate your help and the detailed instructions!

mdadm --stop /dev/md0
mdadm: stopped /dev/md0

mdadm /dev/md0 --assemble /dev/sd[abcde]
mdadm: /dev/md0 assembled from 2 drives - not enough to start the array

mdadm /dev/md0 --assemble --force /dev/sd[abcde]
mdadm: forcing event count in /dev/sdb(1) from 26341 upto 26352
mdadm: forcing event count in /dev/sdd(3) from 26341 upto 26352
mdadm: forcing event count in /dev/sde(4) from 26341 upto 26352
mdadm: clearing FAULTY flag for device 3 in /dev/md0 for /dev/sdd
mdadm: clearing FAULTY flag for device 4 in /dev/md0 for /dev/sde
mdadm: Marking array /dev/md0 as 'clean'
mdadm: /dev/md0 has been started with 5 drives.

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid6 sda[0] sde[4] sdd[3] sdc[2] sdb[1]
      35156259840 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/88 pages [0KB], 65536KB chunk

unused devices: <none>

mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Nov 5 22:00:27 2020
        Raid Level : raid6
        Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
     Used Dev Size : 11718753280 (11175.87 GiB 12000.00 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Nov 26 07:13:29 2020
             State : clean
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : helios64:0  (local to host helios64)
              UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
            Events : 26352

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
       2       8       32        2      active sync   /dev/sdc
       3       8       48        3      active sync   /dev/sdd
       4       8       64        4      active sync   /dev/sde

=============================================================

After running the commands I went back to the OMV web UI and the RAID was listed in a clean state. Under Filesystems I selected the RAID6 "data" and mounted it. I added the shares back to Samba, and all data is accessible.
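One detail worth noting in the mdstat above: the array came back `active (auto-read-only)`. md drops that flag on the first write, or `mdadm --readwrite /dev/md0` clears it explicitly. As a minimal sketch (the `md_health` helper name is made up, not a real tool), here is a way to confirm every member slot is up before mounting; it reads /proc/mdstat-style text on stdin and classifies the `[UUUUU]` member field:

```shell
# Hedged sketch: classify the md member-status field ([UUUUU] = all slots in
# sync, an underscore marks a missing slot). `md_health` is a made-up name.
md_health() {
  # Pull the last "[UUU...]"-style field; [5/5] and friends do not match.
  status=$(grep -o '\[[U_][U_]*\]' | tail -n 1)
  case "$status" in
    *_*) echo "DEGRADED $status" ;;
    '')  echo "NO ARRAY FOUND" ;;
    *)   echo "HEALTHY $status" ;;
  esac
}

# Demonstrated against the output captured above; on a live box this would
# simply be:  md_health < /proc/mdstat
md_health <<'EOF'
md0 : active (auto-read-only) raid6 sda[0] sde[4] sdd[3] sdc[2] sdb[1]
      35156259840 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
EOF
```

The same filter reports DEGRADED for a field like `[UU_UU]`, which would be the signal to stop and investigate before writing anything.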
  2. This is the output:

mdadm --examine /dev/sd[abcde]
/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
           Name : helios64:0  (local to host helios64)
  Creation Time : Thu Nov 5 22:00:27 2020
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
     Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : ae15b2cf:d55f6fcf:1e4ef3c5:1824b2c7

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Nov 26 07:13:29 2020
  Bad Block Log : 512 entries available at offset 56 sectors
       Checksum : 1e6baa82 - correct
         Events : 26352

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 0
    Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
           Name : helios64:0  (local to host helios64)
  Creation Time : Thu Nov 5 22:00:27 2020
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
     Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : e6c4d356:a1021369:2fcd3661:deaa4406

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Nov 25 23:33:03 2020
  Bad Block Log : 512 entries available at offset 56 sectors
       Checksum : 1909966b - correct
         Events : 26341

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 1
    Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
           Name : helios64:0  (local to host helios64)
  Creation Time : Thu Nov 5 22:00:27 2020
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
     Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 31f197dd:6f6d0545:fbc627a6:53582fd4

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Nov 26 07:13:29 2020
  Bad Block Log : 512 entries available at offset 56 sectors
       Checksum : 8e9940b8 - correct
         Events : 26352

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 2
    Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
           Name : helios64:0  (local to host helios64)
  Creation Time : Thu Nov 5 22:00:27 2020
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
     Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : 3296b043:2025da1d:edc6e9eb:99e57e34

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Nov 25 23:33:03 2020
  Bad Block Log : 512 entries available at offset 56 sectors
       Checksum : 739abeb1 - correct
         Events : 26341

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 3
    Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
           Name : helios64:0  (local to host helios64)
  Creation Time : Thu Nov 5 22:00:27 2020
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 23437506560 (11175.87 GiB 12000.00 GB)
     Array Size : 35156259840 (33527.62 GiB 36000.01 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : ecf702ec:8dec71e2:a7f465fa:86b78694

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Nov 25 23:33:03 2020
  Bad Block Log : 512 entries available at offset 56 sectors
       Checksum : 4f08e782 - correct
         Events : 26341

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 4
    Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)

=======

Regarding what I tried: when I noticed I couldn't access the files and folders from my computer, I tried other computers. Then I went to the OMV front end and changed the workgroup, thinking it was just a Windows issue. When that didn't work, I removed the shares from Samba, but did not delete them from Access Rights Management > Shared Folders. At that point I realized the filesystem was missing and mounting was disabled. I did nothing else, either in the OMV web interface or through SSH, except for the info and commands you have requested. Thank you!
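The key fields in the dumps above are the per-device `Events` counters and `Update Time`: sdb, sdd and sde stopped at event 26341 when power was lost, while sda and sdc went on to 26352. Lining those counters up is the fastest way to spot stale members. A small sketch (the `stale_events` helper name is made up), demonstrated here against an abbreviated copy of the captured text:

```shell
# Hedged sketch: print one "device event-count" pair per member from
# `mdadm --examine`-style text on stdin. `stale_events` is a made-up name.
stale_events() {
  awk '
    /^\/dev\// { dev = $1; sub(/:$/, "", dev) }  # "/dev/sdX:" opens each block
    /Events/   { print dev, $NF }                # last field is the counter
  '
}

# On a live system:  mdadm --examine /dev/sd[abcde] | stale_events
stale_events <<'EOF'
/dev/sda:
         Events : 26352
/dev/sdb:
         Events : 26341
/dev/sdc:
         Events : 26352
/dev/sdd:
         Events : 26341
/dev/sde:
         Events : 26341
EOF
```

Any device whose counter lags the highest value is a candidate for `--assemble --force`, which (as the next reply shows) bumps the stale counters up to match.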
  3. Thanks! I tried and got the following:

mdadm --run /dev/md0
mdadm: failed to start array /dev/md/helios64:0: Input/output error

mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Nov 5 22:00:27 2020
        Raid Level : raid6
     Used Dev Size : 18446744073709551615
      Raid Devices : 5
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Nov 26 07:13:29 2020
             State : active, FAILED, Not Started
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : unknown

              Name : helios64:0  (local to host helios64)
              UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
            Events : 26352

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       -       0        0        2      removed
       -       0        0        3      removed
       -       0        0        4      removed

       -       8       32        2      sync   /dev/sdc
       -       8        0        0      sync   /dev/sda

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdc[2] sda[0]
      23437506560 blocks super 1.2

unused devices: <none>
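The Input/output error above is expected: mdadm accepted only sda and sdc, because only their event counters agree, and a RAID6 array can survive the loss of at most two members, so it needs at least (raid devices - 2) present to start. Two of five is one short of that minimum of three. A sketch of the arithmetic (the `raid6_can_start` helper name is made up):

```shell
# Hedged sketch: RAID6 start condition. $1 = raid devices, $2 = members
# accepted at assembly time. `raid6_can_start` is a made-up name.
raid6_can_start() {
  min=$(( $1 - 2 ))  # RAID6 tolerates two missing members
  if [ "$2" -ge "$min" ]; then
    echo "startable: $2 of $1 (minimum $min)"
  else
    echo "not startable: $2 of $1 (minimum $min)"
  fi
}

raid6_can_start 5 2
```

This is why the later `--assemble --force` step matters: forcing the three stale members' event counts up to 26352 brings the accepted count from 2 to 5.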
  4. mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 5
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 5

              Name : helios64:0  (local to host helios64)
              UUID : f2188013:e3cdc6dd:c8a55f0d:d0e9c602
            Events : 26341

    Number   Major   Minor   RaidDevice
       -       8       64        -        /dev/sde
       -       8       32        -        /dev/sdc
       -       8        0        -        /dev/sda
       -       8       48        -        /dev/sdd
       -       8       16        -        /dev/sdb

armbianmonitor -u
http://ix.io/2FKq

I got a UPS for it in the meantime.
  5. Last night we lost power to the whole house. I had to start the NAS manually, and when I tried to access my data there was nothing but empty shares. After checking around in the OpenMediaVault web UI: the filesystem status shows as "missing", there is nothing under RAID Management, and all the disks show up properly under Storage > Disks. The NAS is set up as RAID6 on ext4 with five 12 TB drives.

For starters, I wonder why the internal battery did not gracefully shut down the Helios64 so as to avoid this, given that power outages are a rather common issue here. Then: how do I get my filesystem back? After browsing some forums, I gathered this info may be important:

lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda            8:0    0 10.9T  0 disk
sdb            8:16   0 10.9T  0 disk
sdc            8:32   0 10.9T  0 disk
sdd            8:48   0 10.9T  0 disk
sde            8:64   0 10.9T  0 disk
mmcblk1      179:0    0 14.6G  0 disk
└─mmcblk1p1  179:1    0 14.4G  0 part /
mmcblk1boot0 179:32   0    4M  1 disk
mmcblk1boot1 179:64   0    4M  1 disk

cat /etc/fstab
UUID=dabb3dbf-8631-4051-a032-c0b97eb285bd / ext4 defaults,noatime,nodiratime,commit=600,errors=remount-ro 0 1
tmpfs /tmp tmpfs defaults,nosuid 0 0
# >>> [openmediavault]
/dev/disk/by-label/data /srv/dev-disk-by-label-data ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
# <<< [openmediavault]

cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts

# definitions of existing MD arrays
ARRAY /dev/md/helios64:0 metadata=1.2 name=helios64:0 UUID=f2188013:e3cdc6dd:c8a55f0d:d0e9c602

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sde[4](S) sdc[2](S) sda[0](S) sdd[3](S) sdb[1](S)
      58593766400 blocks super 1.2

unused devices: <none>

fsck /dev/disk/by-label/
fsck from util-linux 2.33.1
e2fsck 1.44.5 (15-Dec-2018)
fsck.ext2: No such file or directory while trying to open /dev/disk/by-label/
Possibly non-existent device?

I don't want to lose data; even though I have it backed up, I don't want to try to fix this blindly. And given that this could happen again, to others and to myself, it would be very helpful to find a recovery method that others could use.
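Worth decoding in the `/proc/mdstat` output above: every disk carries an `(S)` marker, meaning the kernel is holding all five as spares of an inactive array rather than as active RAID6 members. That is why the filesystem shows as missing while the disks themselves look fine under Storage > Disks. A read-only sketch (the `count_spares` helper name is made up) that counts those markers from captured mdstat text:

```shell
# Hedged sketch: count "(S)" spare markers in mdstat-style text on stdin.
# `count_spares` is a made-up name; this inspects text only, touching no disks.
count_spares() {
  grep -o '(S)' | wc -l
}

# On a live system:  count_spares < /proc/mdstat
count_spares <<'EOF'
md0 : inactive sde[4](S) sdc[2](S) sda[0](S) sdd[3](S) sdb[1](S)
      58593766400 blocks super 1.2
EOF
```

Five spares and zero active members confirms the array never assembled after the power loss; the replies above walk through stopping it and reassembling, with `--force` where the event counters disagree.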