
RAID Munging Question



On Fri, Apr 05, 2002 at 12:47:04PM -0500, Matthew J. Brodeur wrote:
>    Note that the system will probably NOT keep running if one drive fails.  
> My experience has been that when an IDE drive goes down the system becomes 
> very unhappy, but this may be different with newer hardware/kernels.  

Looking at a HOWTO (and I forget what else), I read that things can
keep running.  There seems to be a consensus that two RAID 1 disks
should not share one IDE controller, because some failure modes of a
disk can take down the whole controller (and being on two controllers
is faster anyway), so I have my disks on hda and hdc.  (Let's hear it
for dual-controller motherboards.)  I do have my CD-ROM on hdb because
I couldn't figure out how to boot from the PCI IDE controller I
bought.  I can imagine a failing hda taking the CD-ROM down with it,
but the CD is not likely to be in use except when I am there.
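
For reference, the relevant stanza of an /etc/raidtab for such a
mirror split across the two controllers looks something like this (the
partition numbers here are made up, not necessarily what RH generated):

    raiddev /dev/md5
        raid-level            1
        nr-raid-disks         2
        persistent-superblock 1
        device                /dev/hda5
        raid-disk             0
        device                /dev/hdc5
        raid-disk             1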

>    Not to start an argument, but isn't RAID 1 actually slower than single 
> disk access?  It would seem that writing to two drives would take longer 
> than writing one, especially with IDE.

For writing, yes, but that is a price I am willing to pay.  Reading,
on the other hand, is not only a more common operation than writing,
but the current kernel code apparently sends reads for different parts
of the requested data to each disk, so the whole thing comes back
faster.
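
A crude way to check (though a single sequential stream may not show
the full win, since the balancing happens across requests rather than
within one):

    # hdparm -t /dev/hda     # sequential read from one member disk
    # hdparm -t /dev/md5     # same read served by the mirror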

> >   3. Edit /etc/raidtab so /dev/md5 line that currently reads
> >      "raid-level 0" will read "raid-level 1",
> > 
> >   4. "# unmount /home"
> 
>    I'd swap these two, just for sanity.  It shouldn't matter one bit if 
> you change raidtab while the FS is mounted, but it seems like a bad idea.  

Very good point.
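
So, swapped, roughly (assuming /dev/md5 is the /home array, as in my
raidtab):

    # umount /home
    # vi /etc/raidtab    (change "raid-level 0" to "raid-level 1"
                          in the /dev/md5 stanza)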


> You should also verify that the other raid options, such as 
> "chunk-size", make sense.

Funny you should mention that.  RH wrote the raidtab with "chunk-size
64k" lines all over the place, but on booting I see repeated "RAID
level 1 does not need chunksize! Continuing anyway." messages, so I
guess this is a fine time to remove those lines.
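
Something like this should do it, assuming every array in my raidtab
is RAID 1 (chunk-size does matter for RAID 0/4/5, so it shouldn't be
stripped blindly):

    # cp /etc/raidtab /etc/raidtab.bak
    # grep -v chunk-size /etc/raidtab.bak > /etc/raidtab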

>    The only thing I didn't see was unpacking the tarball of the original 
> /home.  You probably would have noticed that on your own, though. ;)

Er, yeah.  I might have noticed that more quickly than I noticed my
too-big /home.
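
For the record, the restore would be something like this (the path is
hypothetical, and it assumes the tarball was made relative to /home):

    # tar -C /home -xpf /backup/home.tar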

>    Other than that I think it'll work.  It looks like you've got all the 
> steps for adding a new RAID volume, which is essentially what you'll be 
> doing.  
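
For anyone following along, once raidtab is right the new-volume steps
boil down to roughly:

    # mkraid /dev/md5      (may demand --really-force, since these
                            partitions were in an array before)
    # mke2fs /dev/md5
    # mount /dev/md5 /home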


Thanks a bunch,

-kb, the Kent who will cross his fingers and hold his nose with this
project sometime over the weekend.



