Everything posted by alexcp

  1. Thank you for the link and for continuing support. I realize you cannot focus on Helios4 forever, but it is good to know there will be updates to the box that keeps my data safe.
  2. Hello, My Helios4's power brick died. What's a good replacement? I almost ordered a Mean Well GST120A-R7B but realized its Mini-DIN connector has a different pinout. Also, Helios4 is end-of-life. What does that mean in practice for software updates and support?
  3. Well, I have not been able to recover my data. Even though the RAID array was clean, the filesystem appeared damaged, as you suspected. The curious 18 TB filesystem on a 32-bit rig is no more, unfortunately; I cannot run any tests on it anymore. The defective HDD was less than a year old and covered by a 3-year warranty, so it is on its way to the manufacturer; hopefully I will get a free replacement. I intend to keep the 4x 10 TB drives on my Intel rig and rebuild the Helios with smaller, cheaper HDDs. To me, the incident is a reminder that a RAID array is not a complete solution for data safety and must be supported by other means, e.g. cloud or tape backups.

I don't remember how I got the 18 TB filesystem. I think I created a smaller one and then resized it up after deleting the encrypted partition, even though such resizing should be impossible according to your link above. Out of curiosity, I just assembled a RAID5 array from the remaining 3x 10 TB disks and tried to create an 18 TB filesystem on it via OMV (a sketch for filesystems above 16 TiB follows after the post list). The result was the following error message:

Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; mkfs -V -t ext4 -b 4096 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 -L 'public' '/dev/mapper/public-public' 2>&1' with exit code '1':
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 4883151872 4k blocks and 305197056 inodes
Filesystem UUID: c731d438-7ccd-4d31-9277-c91b0ea62c72
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632, 2560000000, 3855122432
Allocating group tables: 0/149022 ... done
Writing inode tables: 0/149022 ... 148884/149022 done
Creating journal (262144 blocks): mkfs.ext4: Attempt to read block from filesystem resulted in short read while trying to create journal

In the end, the filesystem was not created, but the error diagnostics above are not what I expected. I remember OMV3 telling me there was a limit on the size of the filesystem; OMV4 did not. Perhaps there was (is?) a hole somewhere in the filesystem tools that allowed me to do a stupid thing and create a filesystem that was unsafe to use on a 32-bit system. "Short read" (see the very end of the message above) was also the predominant failure mode on the previous filesystem. Even so, whatever garbage I had on the RAID array should not have resulted in segmentation faults when trying to read files from the filesystem.
  4. The point about the 16 TB limit is an interesting one; I remember being unable to create, via OMV, an array that would take all the available physical space, and had to settle for the maximum offered by OMV. I also remember that ext4 itself is not limited to 16 TB; the limit is in the tools. (A sketch for keeping the volume under that limit follows after the post list.) Here is lvdisplay:

$ sudo lvdisplay
[sudo] password for alexcp:
  --- Logical volume ---
  LV Path                /dev/omv/public
  LV Name                public
  VG Name                omv
  LV UUID                xGyIgi-U00p-MVVv-zlz8-0quc-ZJwh-tuWvRl
  LV Write Access        read/write
  LV Creation host, time helios4, 2018-02-10 03:48:42 +0000
  LV Status              available
  # open                 0
  LV Size                18.19 TiB
  Current LE             4768703
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           254:0
  5. cp fails, as does SMB network access to the shared folders. A fresh Debian Stretch install behaves identically to the not-so-fresh one, and the previously installed OMV3 on Debian Jessie (I still have its SD card around) shows the same "Internal error: Oops: 5 [#1] SMP THUMB2" (a serial console capture sketch follows after the post list). At this point, I tend to believe this is a hardware issue of some sort, maybe something as simple as a faulty power brick. Too bad it's not the SPI NOR flash, for which the solution is known. Oh well. Over the next few days, I will assemble another rig and try to salvage the data through it. @gprovost: thank you for helping out with this issue!
  6. No luck with rsync. With the faulty HDD physically disconnected, an attempt to rsync the array to either a local USB drive or a hard drive on another machine invariably ends in a segmentation fault and a system crash, as before. Would I be able to access the filesystem on the RAID if I connected the HDDs to an Intel-based machine running Debian and OMV (see the assemble-and-mount sketch after the post list)? I have a little Windows desktop with four SATA ports. I should be able to set up Debian and OMV on a USB stick and use the SATA ports for the array.
  7. The array rebuild completed overnight, see below. Later today I will try rsync to see whether the data can be copied (a resumable rsync sketch follows after the post list).

$ sudo mdadm -D /dev/md127
[sudo] password for alexcp:
/dev/md127:
        Version : 1.2
  Creation Time : Sun Feb 4 18:42:03 2018
     Raid Level : raid10
     Array Size : 19532611584 (18627.75 GiB 20001.39 GB)
  Used Dev Size : 9766305792 (9313.88 GiB 10000.70 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Fri Nov 9 05:34:14 2018
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
           Name : helios4:OMV (local to host helios4)
           UUID : 16d26e7c:3c2aeef9:ec7cdf93:ca0fbfa5
         Events : 32902

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       -       0        0        2      removed
       3       8       64        3      active sync set-B   /dev/sde

$ cat /proc/mdstat
Personalities : [raid10] [raid0] [raid1] [raid6] [raid5] [raid4]
md127 : active raid10 sdb[0] sde[3] sdc[1]
      19532611584 blocks super 1.2 512K chunks 2 near-copies [4/3] [UU_U]
      bitmap: 21/146 pages [84KB], 65536KB chunk
  8. Re-add worked. Note that sda is now the USB drive, so what was sda before is now sdb, etc. (A rebuild-monitoring sketch follows after the post list.)

$ sudo mdadm --manage /dev/md127 --re-add /dev/sdb
mdadm: re-added /dev/sdb
$ sudo mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Sun Feb 4 18:42:03 2018
     Raid Level : raid10
     Array Size : 19532611584 (18627.75 GiB 20001.39 GB)
  Used Dev Size : 9766305792 (9313.88 GiB 10000.70 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent
  Intent Bitmap : Internal
    Update Time : Fri Nov 9 04:59:43 2018
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
         Layout : near=2
     Chunk Size : 512K
 Rebuild Status : 0% complete
           Name : helios4:OMV (local to host helios4)
           UUID : 16d26e7c:3c2aeef9:ec7cdf93:ca0fbfa5
         Events : 32891

    Number   Major   Minor   RaidDevice State
       0       8       16        0      spare rebuilding    /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       -       0        0        2      removed
       3       8       64        3      active sync set-B   /dev/sde
  9. Lacking a spare SATA cable, I swapped the sda and sdc cables. dmesg still shows errors for sdc, so it must be a faulty HDD - a first for me, ever (a SMART check sketch follows after the post list). mdadm -D /dev/md127 gives the following:

I created the encrypted partition when I was originally setting up the Helios - it was part of the setup instructions - but I never used it. I PMed you the logs.

Also, I disconnected sdc and tried to rsync the RAID array to a locally connected USB drive as before. After copying a bunch of files, I got the following; this is the sort of message I was getting before:
  10. Thank you for the quick reply. I confirm there is no mtdblock0 device listed by lsblk. My Helios is fitted with 4x WD100EFAX HDDs, each rated for 5V/400mA and 12V/550mA. The Helios itself is powered by a 12V 8A brick. I tried copying files over SMB with no devices connected to the USB ports, with the same result: one or a few files can be copied without issues, but an attempt to copy a folder crashes the system. armbianmonitor -u output is here: http://ix.io/1reV
  11. Hello, Is there an easy way of disabling SPI to get rid of ATA errors? I am running OMV4 on a pre-compiled Armbian Stretch image. When I try backing up the RAID array, either to a locally connected USB drive using rsync or over the network using SMB, after copying a few files I end up with ATA errors, segmentation faults, or system crashes.
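
Re post 3 (the 18 TB mkfs attempt): with 4 KiB blocks, ext4 needs the 64bit feature to address more than 16 TiB. A minimal sketch, reusing the /dev/mapper/public-public device from the quoted error; whether such a filesystem is then safe to use from a 32-bit kernel is a separate question, and this is not a recommendation to do so:

# Check whether an existing ext4 filesystem carries the 64bit feature flag
$ sudo tune2fs -l /dev/mapper/public-public | grep -E 'features|Block count'
# Request the 64bit feature explicitly when creating a filesystem above 16 TiB
$ sudo mkfs.ext4 -O 64bit -b 4096 -m 0 -L public /dev/mapper/public-public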
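
Re post 4 (the 18.19 TiB logical volume): one conservative option on a 32-bit host is to keep the volume, and therefore the filesystem, below the 16 TiB mark. A sketch only, using the LV path from the lvdisplay output; reducing an LV destroys any data already on it:

# Shrink the LV to just under 16 TiB, then create the filesystem on it
$ sudo lvreduce -L 15.9T /dev/omv/public
$ sudo mkfs.ext4 -b 4096 -m 0 -L public /dev/omv/public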
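
Re post 5 (the repeating "Oops: 5 [#1] SMP THUMB2"): the full Oops trace is easier to capture from a second machine over the Helios4's serial console than from a session that dies with the crash. A sketch, assuming the console shows up as /dev/ttyUSB0 on the second machine and the usual 115200 baud rate:

# On a second machine connected to the Helios4's micro-USB serial console
$ sudo screen /dev/ttyUSB0 115200
# ...then reproduce the crash and save the trace that scrolls past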
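
Re post 6 (moving the disks to an Intel box): md RAID metadata is architecture-independent, so the array should assemble on any Linux machine with the disks attached. A sketch, assuming the array keeps the name md127 and using a hypothetical /mnt/recovery mount point; mounting read-only avoids further damage to a suspect filesystem:

# On the Intel machine, with the three healthy disks attached
$ sudo mdadm --assemble --scan
$ cat /proc/mdstat
$ sudo mkdir -p /mnt/recovery
$ sudo mount -o ro /dev/md127 /mnt/recovery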
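
Re post 7 (copying the data off after the rebuild): a resumable rsync keeps partially transferred files around, so a crash mid-copy does not force a restart from zero. A sketch with example paths; the actual source mount point depends on how OMV labelled the filesystem:

# Archive mode, keep hard links/ACLs/xattrs, show overall progress, keep partial files
$ sudo rsync -aHAX --partial --info=progress2 /srv/dev-disk-by-label-public/ /media/usb-backup/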
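
Re post 8 (the re-add and rebuild): the rebuild progress can be followed without re-running mdadm -D by hand. A small sketch:

# Refresh the rebuild status once a minute
$ watch -n 60 cat /proc/mdstat
# Or query the detailed view on demand
$ sudo mdadm -D /dev/md127 | grep -E 'State|Rebuild Status'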
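
Re post 9 (the suspect drive): SMART data usually settles whether the fault lies with the drive or the cabling, and the serial number identifies the physical disk regardless of how the sdX letters shift. A sketch, assuming smartmontools is installed and the suspect drive is still /dev/sdc:

# Overall health, serial number, and the relocation/error counters that matter
$ sudo smartctl -a /dev/sdc | grep -iE 'serial|overall-health|reallocated|pending|uncorrectable'
# Optionally start a long self-test; results appear later under: smartctl -l selftest /dev/sdc
$ sudo smartctl -t long /dev/sdc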