
Best redundant scheme or filesystem for ARM SBC



Posted

Hi all

I would like to start this topic to discuss the best approach for getting some degree of data integrity on ARM SBCs (primarily aarch64 SoCs).

My goal is to set up a big archive; I plan to build it from 5x 8TB HDDs, scheme TBD.

The goal is to tolerate at least one hard drive failure. It is acceptable that some data is lost during a rebuild: since the main purpose of the archive is to store pictures and video, it is tolerable if a few pictures or videos get corrupted, but the rebuild must not stop in that case.

Write performance is not so important, but read performance must sustain some video streaming, roughly at least 25 MB/s.

My enclosure is based on a JMicron JMB394, which has hardware RAID support up to RAID5 (apparently real HW RAID, not fake SW RAID such as Intel's).

Nevertheless, I have verified that, unfortunately, it does not produce any metadata that mdadm can understand, so if the enclosure fails I don't know how to recover the array unless I can get another identical enclosure.

So I am investigating the possibility of a software RAID.

I have read about the URE statistics for RAID5 rebuilds of large disks. It is still not clear to me whether, if a URE occurs during a RAID5 (or RAID6) rebuild, the rebuild just stops, or whether the array gets rebuilt with the corresponding files corrupted (first question).
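For an order-of-magnitude feel of why this worries me, here is a rough back-of-the-envelope estimate in Python (assuming the usual datasheet spec of one URE per 1e14 bits read and independent errors, which is of course a simplification):

import math

# Back-of-the-envelope: chance of hitting at least one URE while reading the
# four surviving 8TB disks during a degraded 5-disk RAID5 rebuild.
# Assumes the common datasheet spec of 1 unrecoverable read error per 1e14
# bits and independent errors - a simplification, so order-of-magnitude only.
URE_RATE = 1e-14                              # errors per bit read
BITS_READ = 4 * 8e12 * 8                      # 4 disks x 8 TB x 8 bits/byte
p_ure = 1 - math.exp(-URE_RATE * BITS_READ)   # Poisson approximation
print(f"bits read during rebuild: {BITS_READ:.2e}")
print(f"P(at least one URE):      {p_ure:.0%}")

With numbers like that for 8TB disks, whether the implementation aborts the rebuild or only corrupts the affected stripe makes a real difference.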

So I was focusing more on RAID6, but I am also investigating ZFS RAID-Z and BtrFS.

@tkaiser I have read your post about ZFS; if I understand correctly you reach 4-5 MB/s in write but more than 200 MB/s in read, on an x64 system. Is that correct?

Do you have some figures for mdadm RAID5 or RAID6, and perhaps BtrFS, which should work more reliably with kernels > 4.12.x since the RAID5/6 problems (at least the most critical ones) have been fixed?

Let's put some ideas on the table, I think it would be valuable for most of us!

Bye

 

Posted

If you indicate "some degree of data integrity" and also "it is tolerable that few pictures or video get corrupted" it looks like your filesystem does not need some flavor of RAID for everything. It would be OK to have redundancy for just metadata at least.  The HW based RAID in the enclosure you mention would be fine, as then you don't use a single SATA link (chipset to host SBC, see http://www.jmicron.com/PDF/brief/jmb394.pdf ) for RAID channels. For SW RAID, you do, so it is not really RAID anymore if consider the SATA interfacing part of the 'Inexpensive Disk'. Also, you should probably check the support/implementation of SATA portmultiplexing in the kernel you intend to use.

I have been using ZFS RAID-Z, BtrFS raid5 and mdadm RAID5 in 2016 (on AMD64 HW). The issue with both ZFS and mdadm RAID is that they are not (so) flexible with respect to changing the number and sizes of disks once the array is set up and the data is on it. With BtrFS you can change raid/redundancy levels and combinations on the fly later on. So with Armbian on kernel 4.14 and your 5x 8TB disks, you can just start with one disk and the single profile for data and metadata, then later add one or more disks to the filesystem and also change the profile to raid1 for metadata and, for example, NoCoW for data; a rough sketch of that sequence is below.
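Roughly, the sequence looks like this (only a sketch; device names and the /mnt/archive mount point are placeholders, and with dry_run=True it just prints the btrfs-progs commands instead of running them):

# Sketch of the workflow described above: start with one disk using the
# "single" profile, add devices later, then rebalance metadata to raid1.
# Device names and the mount point are placeholders; with dry_run=True the
# script only prints the commands instead of executing them.
import subprocess

def run(cmd, dry_run=True):
    print(("would run: " if dry_run else "running: ") + " ".join(cmd))
    if not dry_run:
        subprocess.run(cmd, check=True)

# 1. initial single-disk filesystem, single profile for data and metadata
run(["mkfs.btrfs", "-L", "archive", "-d", "single", "-m", "single", "/dev/sda"])
run(["mount", "/dev/sda", "/mnt/archive"])

# 2. later on: add another disk to the mounted filesystem
run(["btrfs", "device", "add", "/dev/sdb", "/mnt/archive"])

# 3. convert only the metadata to raid1 (data stays single, no redundancy)
run(["btrfs", "balance", "start", "-mconvert=raid1", "/mnt/archive"])

# 4. optionally mark a directory NoCoW before filling it (affects new files only)
run(["chattr", "+C", "/mnt/archive/video"])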

There is much more to say here, for example that you can clone (by send | receive) as a backup and/or offline cold storage (see the sketch below), but I would browse/read the btrfs wiki and only use raid56 once you are familiar enough with raid1, profile conversion, the tools, etc. I currently plan to make everything btrfs (SD card rootfs and data on a 3.5-inch HDD), Armbian-based, with as little effort as possible on a BananaPro; it worked for almost 2 years with openSUSE Tumbleweed, but I want it Debian-based and solar-powered.
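The send | receive part boils down to a read-only snapshot piped into another btrfs filesystem, roughly like this (again only a sketch with placeholder paths):

# Minimal send | receive sketch: take a read-only snapshot and pipe it to a
# second btrfs filesystem mounted as backup target. All paths are placeholders.
import subprocess

SRC = "/mnt/archive"                       # source subvolume
SNAP = "/mnt/archive/archive-snap1"        # arbitrary snapshot name
DEST = "/mnt/backup"                       # another btrfs filesystem

# send requires a read-only snapshot, hence -r
subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SRC, SNAP], check=True)

# btrfs send writes a stream to stdout; btrfs receive replays it on the target
send = subprocess.Popen(["btrfs", "send", SNAP], stdout=subprocess.PIPE)
subprocess.run(["btrfs", "receive", DEST], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()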
