
BLU Discuss list archive



LVM, usb drives, Active Directory



I would agree with Dan. Multi-bay SATA enclosures come in a range of
prices and sizes, and you will get better throughput with them than
with USB. Using RAID 1, RAID 5, or RAID 6 gives you protection against
drive failure; RAID 0 lets you link your drives together (striped
sets), but gives you no redundancy. Linux software RAID gives you the
flexibility to do most of these RAID levels, though you also need to
consider the CPU load it puts on the server; that overhead is why many
commercial systems use dedicated RAID controllers. I am seeing 5-bay
enclosures for under $100.

The bottom line is that you are much better off going with direct SATA
rather than trying to stripe 10 USB drives together.
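
To make that concrete, here is a minimal sketch of building a RAID 6
array with Linux software RAID (mdadm), along the lines Dan suggests
below. The device names (/dev/sdb through /dev/sdh), the filesystem,
and the mount point are assumptions for illustration, not details from
the original setup:

    # Create a RAID 6 array from seven disks (hypothetical names);
    # RAID 6 survives the loss of any two member drives.
    mdadm --create /dev/md0 --level=6 --raid-devices=7 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

    # Put a filesystem on the array and mount it.
    mkfs.ext3 /dev/md0
    mkdir -p /storage
    mount /dev/md0 /storage

    # Record the array so it is assembled at boot (CentOS path).
    mdadm --detail --scan >> /etc/mdadm.conf

    # Watch the initial sync progress.
    cat /proc/mdstat

A dead drive is then swapped out with mdadm --fail/--remove/--add
rather than with a restore from backup.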



On 12/15/2009 10:17 AM, Dan Ritter wrote:
> On Tue, Dec 15, 2009 at 07:49:32AM -0500, Scott Ehrlich wrote:
>
>> I have a client with a handful of USB drives connected to a CentOS
>> box.  I am charged with binding the USB drives together into a single
>> LVM for a cheap storage data pool (10 x 1 TB USB drives = 10 TB cheap
>> storage in a single mount point).
>>
> = very likely to be unusable.
>
> Let's suppose you expect a drive to die once in four years, on
> average, at which point you will replace it.
>
> For a ten-disk filesystem as you have outlined, you can expect
> downtime and loss of data every 5 months or so: ten drives at one
> failure per 48 months each averages one failure every 4.8 months,
> and with plain LVM concatenation a single drive failure takes the
> whole volume down.
>
> USB tends to be fragile over the long run, too. In my
> experience, you can assume that a USB-connected disk will need a
> bus reset every 6-8 weeks. If it's part of an LVM filesystem as
> above, that's another downtime + potential loss of data event.
>
> If you want to have a 10 TB filesystem which has reliability as
> the first goal, and cheapness as the second, I would move the
> disks into (e)SATA enclosures, which will solve the USB
> flakiness problem, and get more of them, to do Linux software
> RAID. Best would be RAID10 with 20 disks, although overall it
> may be cheaper to do RAID10 with 10 2TB disks. If you really
> have to go cheap, RAID6 with 7 2TB disks.
>
> Or.
>
> Do you really need to present a 10 TB filesystem, or is it just
> for convenience in mounting? I believe you could offer an export
> of /storage, and have the machine with all the disks mount them
> on /storage/1, /storage/2, etc. That way you don't do RAID or
> LVM concatenation at all, and you don't get the ability to store
>  >1TB files, but you do get loss isolation -- if /storage/4 goes
> down, you only have to recover that 1 TB volume and the rest of
> the filesystems are still available.
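
As a rough sketch of the per-disk scheme Dan describes above, with one
filesystem per drive and an NFS export of the parent directory (the
device names and export options are assumptions for illustration):

    # /etc/fstab -- one independent filesystem per USB drive
    /dev/sdb1   /storage/1   ext3   defaults   0 2
    /dev/sdc1   /storage/2   ext3   defaults   0 2
    # ...one line per remaining drive, through /storage/10

    # /etc/exports -- export the parent; crossmnt lets clients see
    # the filesystems mounted underneath it
    /storage    *(rw,sync,crossmnt)

In practice you would mount by label or UUID rather than by /dev/sdX
name, since USB device names can change after a bus reset.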


-- 
Jerry Feldman <gaf-mNDKBlG2WHs at public.gmane.org>
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846







