
BLU Discuss list archive



Journaling file systems revisited



Jerry Feldman wrote:
>Now for the question:
>The laptop gets booted 2 or 3 times a day. At home, I have some file 
>systems I keep unmounted except for backups, so they get mounted daily. 
>Normally they would require periodic full fscks (either by the number of 
>mounts or the time). This can be adjusted via tune2fs. Is there any
>point at which ext3 would require a full fsck through normal mount and
>unmount? I suspect that reiser rarely would require this. So, in
>general, I would assume that a journaling file system does not need a
>periodic equivalent to the fsck. Glenn, I think you have a lot of
>experience with JFS or XFS.

Actually, ext2 doesn't really REQUIRE it either.  You can use 'tune2fs
-c 0' on the filesystem and it will never force a check.  If you read
the man page entry for '-c' you'll find it cautioning about 'what if
you have some hardware problem', not that there might be problems in
the ext2 filesystem itself.  I think it's a 'belt and suspenders'
thing: if you are bringing your filesystem up and down a lot, maybe
something's wrong in general, so let's check anyway.
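
For concreteness, the knobs look something like this; /dev/hda3 here
just stands in for whatever filesystem you're tuning:

    # disable the mount-count-based forced check
    tune2fs -c 0 /dev/hda3
    # disable the time-based check too (Jerry's other trigger)
    tune2fs -i 0 /dev/hda3
    # verify the new settings
    tune2fs -l /dev/hda3 | grep -i 'mount count\|check interval'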

I'm not sure I see the point of a journaled filesystem in this case.
If you are doing full backups every time, then if something happens to
your machine in the middle of a backup you can just re-mkfs the backup
filesystem and start over.  Why pay the disk/CPU overhead of a
journaled filesystem ALL the time for the rare (and recoverable) case
when something happens in the middle of a backup?  If you do
incrementals, then there might be a point.  Even then, only files that
were actively being written during the hardware failure are likely to
be corrupted.  A regular fsck should take care of that fairly easily,
leaving previous incrementals untouched.  I'm assuming here that you
are using tar/cpio/dump to create single-file archives of entire
filesystems.  If you are keeping your backups as filesystem copies
(cp -R or similar), then journaling makes a lot more sense.
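
Something like this is the pattern I have in mind; the device and
mount point are just placeholders for your setup:

    # bring up the normally-unmounted backup filesystem
    mount /dev/hdb1 /backup
    # dump the root filesystem into one tar archive (GNU tar;
    # --one-file-system keeps it from descending into /backup)
    tar -cf /backup/full-$(date +%Y%m%d).tar --one-file-system /
    umount /backup
    # if the machine dies mid-backup: mke2fs /dev/hdb1, remount,
    # and just rerun the tar -- nothing on /backup is precious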

Another thing to consider is that any corruption caused by bad memory
or a flaky disk cable that is severe enough for fsck or its equivalent
to notice has probably already totally f*cked the data copied during
the backup.  Having a nicely journaled copy of garbage isn't going to
be much help...
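
If that failure mode worries you, the cheap insurance isn't a journal,
it's verifying the copy right after you make it.  A rough sketch, with
a hypothetical archive name:

    # compare the archive back against the disk (run from /, since
    # GNU tar strips the leading / from member names); corruption
    # introduced during the copy shows up as a diff -- though files
    # legitimately changed since the backup will also be reported
    (cd / && tar -df /backup/full-20020101.tar)
    # and checksum the archive so later bit-rot is detectable too
    md5sum /backup/full-20020101.tar > /backup/full-20020101.md5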

				   Take care,
				   Bill Bogstad






