
BLU Discuss list archive



Disk Recovery Part III



Don Levey <lug at the-leveys.us> wrote:
> I've partitioned one of them as follows ...

This and a couple of the followups lead me to ask:  are you using the logical
volume manager (lvm)?

There are basically two ways to go about partitioning:  you can create
separate primary partitions for each filesystem, or you can create one primary
partition and use the logical volume manager to create logical volumes.
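For the second approach, the basic sequence looks roughly like this (device and volume names here are placeholders, not from the original poster's setup):

```shell
# Mark the partition as an LVM physical volume
# (/dev/sda2 is a hypothetical device name).
pvcreate /dev/sda2

# Create a volume group named "vg0" from it.
vgcreate vg0 /dev/sda2

# Carve out logical volumes for root and /home, leaving the
# rest of the volume group unallocated for later growth.
lvcreate -L 10G -n root vg0
lvcreate -L 20G -n home vg0

# Make filesystems on the new logical volumes.
mkfs.ext3 /dev/vg0/root
mkfs.ext3 /dev/vg0/home
```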

You created four partitions plus swap, so I have to ask: is there a good reason
to make this any more complex than the bare essentials?  The bare essentials
would be a root, a /boot, and a swap.  I have my own reasons to create a
handful of separate volumes: to make my backups run a bit more efficiently,
and to ensure that runaway applications filling up a user filesystem won't
starve the root fs of storage.

I'm just asking this to prod you into thinking it through, because now is the
time to simplify your setup if you can.

Note that one of the niceties of the lvm tools is that you can leave some
storage unallocated, and later add it only as necessary to whichever
filesystem is running low on storage.  It's a feature that I discovered first
on an RS6000 system running AIX and I used it to great advantage in a
development shop where certain engineers tended to hog available space--I
could resolve crises quickly and then go after the abusers later.  You won't
run into this on a home system but you will still find lvm tools handy as your
storage requirements change.  (For example, I keep music on one volume and
still pictures on another; from time to time the faster-growing collection
shifts from one to the other.)
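Growing a filesystem out of that unallocated pool is a two-step job, roughly like this (volume names hypothetical; resize2fs works for ext2/ext3, and older kernels may require the filesystem to be unmounted first):

```shell
# Check how much free space remains in the volume group.
vgdisplay vg0

# Grow the logical volume by 5 GB out of the unallocated pool.
lvextend -L +5G /dev/vg0/music

# Then grow the filesystem itself to fill the enlarged volume.
resize2fs /dev/vg0/music
```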

You can, if necessary, mix & match multiple RAID partitions with lvm volumes.
But the keep-it-simple strategy applies here too:  only break things out into
multiple partitions if you have to--say, because some physical volumes are
unavailable during an upgrade.  Once you've freed up the drive used to stage
the upgrade, re-build the volumes and re-sync the arrays so everything is
clean again.
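The usual way to combine the two is to build the mirror first and then layer lvm on top of the md device, rather than on the raw partitions--something like this (device names are placeholders):

```shell
# Build a RAID1 mirror from two partitions of type fd
# (Linux raid autodetect).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Put LVM on top of the mirror, not on the raw partitions.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
```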

One other point to ask the larger group here:  why is it that the SuSE
installation script insists that I have *both* RAID partitions available
whenever I create a new mirror?  I've been able to create partitions of type
fd and build lvm on top of RAID1 with only a single drive in the past--but
only by running the tools at the command line.  I was inconvenienced the last
time I tried doing this with SuSE's scripts.  Does RH do that too?
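At the command line this was along the following lines (device names illustrative; "missing" is the literal keyword mdadm accepts in place of an absent member, creating a degraded mirror):

```shell
# Create a RAID1 array with one real member and one placeholder.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 missing

# Later, once the second drive is available, add it and let
# the array re-sync on its own.
mdadm /dev/md0 --add /dev/sdb2
```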

-rich





BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.




Boston Linux & Unix / webmaster@blu.org