ning Posted December 5, 2016
hi, I have an OPi PC running Armbian (legacy kernel) that I plan to set up as a NAS. Since it has 4 USB ports without an internal hub, I decided to connect 3 USB HDDs and create an LVM+RAID5 array. Unfortunately, the OPi PC only recognizes 2 USB HDDs (any 2 of the 3, on any of the ports). The 3rd USB HDD shows up in lsusb but hangs at "spinning up...." and no /dev/sdc appears. However, if I plug in a USB flash drive instead (with the 2 USB HDDs already attached), it is recognized and /dev/sdc is created. I suspect there is a power supply issue. Can this be fixed by using USB HDDs with their own power supply? Is that right?
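Symptoms like this usually show up in the kernel log. A minimal diagnostic sketch (the exact log messages vary by kernel version; device names are illustrative):

```shell
# List attached USB devices; the drive's USB-SATA bridge should appear
# here even when no block device was created
lsusb

# Follow the kernel log while plugging in the 3rd drive; undervoltage /
# enumeration problems typically appear as "reset", "device descriptor
# read" errors, or a drive stuck at "spinning up"
dmesg -w

# Check which block devices were actually created
ls /dev/sd?
```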
tkaiser Posted December 5, 2016
Yes, this is a power supply issue. And the whole idea is insane and just a waste of resources. RAID5 is crap, especially when combining a lot of unreliable hardware. If you really care about availability, better use the vanilla kernel and a btrfs mirror (way better than mdraid these days). But if you care about data security/reliability, then start to care about data integrity too: use one disk with btrfs for your data and another, larger one for backup (read as: taking btrfs snapshots and sending them to the other device).
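The snapshot-and-send backup described above could look roughly like this (mount points are made up for illustration; both filesystems must be btrfs):

```shell
# Data disk mounted at /mnt/data, backup disk at /mnt/backup
SNAP=/mnt/data/.snap-$(date +%F)

# Take a read-only snapshot of the data subvolume
btrfs subvolume snapshot -r /mnt/data "$SNAP"

# Replicate the snapshot onto the backup disk
btrfs send "$SNAP" | btrfs receive /mnt/backup
```

After the first full transfer, subsequent runs can use `btrfs send -p <previous-snapshot>` so only the changes since the last snapshot are sent.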
ning (Author) Posted December 5, 2016
thank you. You are right, combining a lot of unreliable hardware is not a good idea. Right now I have set up an LVM striped volume across the two USB HDDs. For data integrity, I plan to use another disk to back up the data.
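For reference, an LVM striped volume like the one described can be created along these lines (device names, volume names and mount point are placeholders):

```shell
# Mark both USB disks as LVM physical volumes
pvcreate /dev/sda /dev/sdb

# Group them into one volume group
vgcreate nas_vg /dev/sda /dev/sdb

# Create a logical volume striped across both disks (-i 2 = two stripes)
lvcreate -i 2 -l 100%FREE -n nas_lv nas_vg

# Put a filesystem on it and mount
mkfs.ext4 /dev/nas_vg/nas_lv
mount /dev/nas_vg/nas_lv /mnt/nas
```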
tkaiser Posted December 5, 2016
"right now, I have set up an LVM striped volume across the two USB HDDs."
If I were you, I would take the latest vanilla beta image from http://image.armbian.com/betaimages/ and stop using stuff from the last decade (mdraid, LVM), relying on better tools instead. I don't see the need for a stripe (RAID-0) here since the OPi PC has just Fast Ethernet, so you will always be bottlenecked by the network. But with btrfs you could use a combination of stripe and mirror: striped data and mirrored metadata. Since btrfs uses checksums, this allows you to reliably check for data integrity even with striped data: mkfs.btrfs -m raid1 -d raid0 /dev/sda /dev/sdb You can then run regular scrubs and get a warning if data got corrupted, since the checksums are mirrored on both drives. This does not protect against failing drives, but using a 3rd drive to store snapshots (backup) is a good way to deal with that, since most data loss is caused not by dying disks but by simple mistakes. Anyway: I would avoid RAID entirely, since its only purpose is increased availability, and that's a joke with this kind of hardware anyway. If it's about data and not 99.99999% availability, choosing a good backup strategy and looking after data integrity is always the better choice.
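The regular scrub mentioned above could be run and scheduled roughly like this (the mount point /mnt/nas is a placeholder):

```shell
# Start a scrub: btrfs re-reads all data and verifies checksums;
# with mirrored metadata, bad copies are repaired from the good one
btrfs scrub start /mnt/nas

# Check progress and the error counters afterwards
btrfs scrub status /mnt/nas

# Example cron entry (e.g. in /etc/cron.d/btrfs-scrub) to scrub monthly:
# 0 3 1 * * root /bin/btrfs scrub start -Bq /mnt/nas
```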