
BLU Discuss list archive



Journaling file systems revisited



These are good points. Before I installed the current drive, I had one of 
the backup partitions set to skip forced checks. Since I keep them unmounted 
most of the time, they should not encounter the usual problems, such as a 
power failure when my wife vacuums over the power cord :-)
On 9 Aug 2002 at 12:36, Bill Bogstad wrote:

> 
> Jerry Feldman wrote:
> >Now for the question:
> >The laptop gets booted 2 or 3 times a day. At home, I have some file 
> >systems I keep unmounted except for backups, so they get mounted daily. 
> >Normally they would require periodic full fscks (either by the number of 
> >mounts or the time). This can be adjusted via tune2fs. Is there any point at 
> >which ext3 would require a full fsck through normal mount and unmount? I 
> >suspect that reiser rarely would require this. So, in general, I would 
> >assume that a journalling file system does not need a periodic equivalent 
> >to the fsck. Glenn, I think you have a lot of experience with JFS or XFS. 
> 
> Actually, ext2 doesn't really REQUIRE it either.  You can use 'tune2fs
> -c 0' on the filesystem and it will never force a check.  If you read
> the man page entry for '-c' you'll find it cautioning about 'what if
> you have some hardware problem' not that there might be problems in
> the ext2 filesystem itself.  I think it's a 'belt and suspenders'
> thing.  i.e. If you are bringing your filesystem up and down a lot,
> maybe something's wrong in general, so let's check anyway.
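[The `tune2fs -c 0` behavior described above can be tried safely against a
scratch image file instead of a real partition. A minimal sketch, assuming
e2fsprogs is installed; the file name `fs.img` is illustrative:]

```shell
# Create a small scratch file and put an ext2 filesystem on it
# (no root needed when the target is a regular file, not a device).
dd if=/dev/zero of=fs.img bs=1M count=8 2>/dev/null
mke2fs -q -F fs.img

# -c 0 disables the mount-count-based forced fsck; -i 0 disables the
# time-interval-based one.  Together they mean "never force a check".
tune2fs -c 0 -i 0 fs.img

# Confirm: "Maximum mount count" is now -1, i.e. disabled.
tune2fs -l fs.img | grep 'Maximum mount count'
```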
> 
> I'm not sure I see the point of a journaled filesystem in this case.
> If you are doing full backups every time, then if something happens to
> your machine in the middle of a backup you can just re-mkfs the backup
> filesystem and start over.  Why pay the disk/cpu overhead of a
> journaled filesystem ALL the time for the rare (and recoverable) case
> when something happens in the middle of a backup?  If you do
> incrementals then there might be a point.  Even then, only files that
> were actively being written during the hardware failure are likely to
> be corrupted.  A regular fsck should take care of that fairly easily
> leaving previous incrementals untouched.  I'm assuming here that you
> are using tar/cpio/dump to create single file archives of entire
> filesystems.  If you are keeping your backups as filesystem copies (cp
> -R or similar) then journaling makes a lot more sense.
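[The "single file archive of an entire filesystem" style Bill assumes can be
illustrated with tar alone. The directory and file names below are made up
for the example, not taken from the thread:]

```shell
# Stand-in for a filesystem tree to be backed up (illustrative names).
mkdir -p src/home
echo "important data" > src/home/notes.txt

# One compressed archive per tree: if a backup run is interrupted, only
# this single file is suspect -- delete it and rerun, no fsck needed.
tar -czf backup.tar.gz -C src home

# Listing the archive is a cheap after-the-fact integrity check.
tar -tzf backup.tar.gz
```

[On a real backup partition you would mke2fs and mount it first (root
required), which is exactly why re-running mkfs after an interrupted
backup is so cheap.]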
> 
> Another thing to consider is that any memory/disk cable caused
> corruption that is likely to modify something that fsck or its
> equivalent will see has probably already totally f*cked the data
> copied during the backup.  Having a nicely journaled copy of garbage
> isn't going to be much help...
> 
> 				   Take care,
> 				   Bill Bogstad


-- 
Jerry Feldman <gaf at blu.org>
Associate Director
Boston Linux and Unix user group
http://www.blu.org PGP key id:C5061EA9
PGP Key fingerprint:053C 73EC 3AC1 5C44 3E14 9245 FB00 3ED5 C506 1EA9





BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.



