OPi PC USB HDD issue


ning


Hi,

 

I have an OPi PC running Armbian (legacy kernel) and plan to set it up as a NAS.

Since it has 4 USB ports without an internal hub, I decided to connect 3 USB HDDs to it and create an LVM+RAID5 array.
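For reference, a minimal sketch of that planned layout (the device names /dev/sda to /dev/sdc and the volume group name are assumptions):

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc   # 3-disk RAID5
pvcreate /dev/md0                    # put LVM on top of the md array
vgcreate nas /dev/md0
lvcreate -l 100%FREE -n data nas     # one big logical volume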

 

Unfortunately, the OPi PC only recognizes 2 USB HDDs (any 2 of the 3, on any of the USB ports). The 3rd USB HDD shows up in lsusb but stops at "spinning up...." and no /dev/sdc is created.

 

However, if I insert a USB flash drive (with the 2 USB HDDs already connected), it is recognized and /dev/sdc is created.
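A quick way to confirm the symptom from the command line (output details vary, this is just a sketch):

lsusb                # the 3rd enclosure is listed on the bus
dmesg | tail -n 30   # kernel log stalls around the spin-up message
ls /dev/sd?          # only /dev/sda and /dev/sdb exist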

 

 

I suspect there is a power supply issue. Can this be fixed by using USB HDDs with their own external power supplies?

Is that right?


Yes, this is a power supply issue. And the whole idea is insane and just a waste of resources. RAID5 is crap, especially when combining a lot of unreliable hardware.

 

If you really care about availability, better use the vanilla kernel and a btrfs mirror (way better than mdraid these days). But if you care about data security/reliability, then start to care about data integrity as well: use one disk with btrfs for your data and another, larger one for backup (read as: making btrfs snapshots and sending them to the other device), as sketched below.
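A minimal sketch of that snapshot-and-send cycle, assuming the data disk is mounted at /mnt/data and the backup disk at /mnt/backup (both paths are placeholders):

btrfs subvolume snapshot -r /mnt/data /mnt/data/.snap-today    # read-only snapshot (required for send)
btrfs send /mnt/data/.snap-today | btrfs receive /mnt/backup   # replicate it onto the backup disk

Later runs can pass -p <previous-snapshot> to btrfs send so that only the changes get transferred.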


Thank you.

 

You are right, combining a lot of unreliable hardware is not a good idea.

Right now, I have set up an LVM striped volume across the two USB HDDs, roughly as sketched below.
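Roughly along these lines (a sketch; device names, volume group name, and filesystem are my assumptions):

pvcreate /dev/sda /dev/sdb              # register both disks with LVM
vgcreate nas /dev/sda /dev/sdb
lvcreate -i 2 -l 100%FREE -n data nas   # -i 2 stripes across both physical volumes
mkfs.ext4 /dev/nas/data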

 

For data integrity, I plan to use another disk to back up the data.


> Right now, I have set up an LVM striped volume across the two USB HDDs.

 

If I were you I would take the latest vanilla beta image from http://image.armbian.com/betaimages/ and stop using stuff from the last decade (mdraid, LVM) in favor of something better. I don't see the need for a stripe (RAID-0) here since the OPi PC has just Fast Ethernet, so you will always be bottlenecked by the network. But by using btrfs you could use a combination of stripe and mirror: striped data and mirrored metadata. Since btrfs uses checksums, this allows you to reliably check for data integrity even with a stripe for data:

mkfs.btrfs -m raid1 -d raid0 /dev/sda /dev/sdb   # mirrored metadata (raid1), striped data (raid0)

You can then run regular scrubs (see below) and get a warning if data got corrupted, since the checksums are mirrored on both drives. This does not protect against a failing drive, but using a 3rd drive to store snapshots (backups) is a good way to deal with that, since the most common cause of data loss is not a dying disk but simple mistakes.
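A scrub run looks like this (the mount point /mnt/data is an assumption):

btrfs scrub start /mnt/data    # verify all checksums in the background
btrfs scrub status /mnt/data   # report progress and any errors found

Something like '@weekly btrfs scrub start -Bq /mnt/data' in a crontab makes this automatic.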

 

Anyway: I would avoid RAID altogether, since its only purpose is increased availability, and that's a joke with this kind of hardware anyway. If it's about your data and not about 99.99999% availability, choosing a good backup strategy and looking after data integrity is always the better choice. :)

