
BLU Discuss list archive



[Discuss] On-site backups revisited - rsnapshot vs. CrashPlan



On 02/22/2013 03:29 PM, Rich Braun wrote:
> Rich Pieri noted:
>> What possible benefit is there to using a one-off custom database when
>> the problem has already been solved? ... It only works
>> with custom tools designed to work with that custom database just like
>> Legato Networker (or whoever owns it now) or any of a plethora of
>> vendor lock-in "solutions".
> Well, I'll hazard a guess that in 90%+ of cases, anyone who has ever
> approached the backup problem has come up with a one-off custom database as an
> integral component of their solution.  Look behind the scenes and chances are
> you'll find a database, by any other name, lurking underneath.  You need a
> list of files, you need a place to put file meta data, you need a way to run
> comparisons.
>
> That has nothing whatsoever to do with whether Rich Braun is going to lock a
> potential user into a particular solution, even if my code were to be
> published and used. I'm not following your argument.
>
>> I'm not against teaching. I am against the idea that "let me throw a
>> database at it" is ever a good answer. Backups need to be simple to
>> create and simple to restore. Anything that complicates these two
>> requirements is to be avoided.
> Different strokes, different folks.  You can avoid complexity, you can avoid
> databases, whatever.  Those are your choices.
>
> I pretty much never choose a solution based on hard-and-fast criteria like
> those.  Each reader here makes their own choices, and I'm sure many agree with
> you.  But on this matter, you and I do not.
>
Rich,
rsnapshot and the other rsync-based backups do not use databases. What
rsnapshot does is create a duplicate directory tree of whatever you
backed up. Each tree lives under a root named hourly.n, daily.n, or
monthly.n. Simple and straightforward. And, as I mentioned, you can
simply take a checksum and leave the checksum file in the hourly.n
directory. It is reasonably efficient because it uses hard links, so
unchanged files are not duplicated. If you have to restore a file or
directory, it is there in the same relative place as the original.
Unlike most other backup systems I have used, if a user needs to
restore a file from 3 days ago, you just go to daily.2/.../<location
of the file>. In addition, all file ownerships and permissions are
fully preserved.
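
For anyone curious about the mechanics, here is a rough Python sketch
of the hard-link idea. rsnapshot itself gets the same effect with
cp -al and rsync rather than any custom code; the paths, the
unchanged-file test, and the CHECKSUMS.sha256 file name below are just
my own illustration.

#!/usr/bin/env python3
# Rough sketch of hard-link snapshots -- illustration only, not how
# rsnapshot is implemented.  The mtime/size test is a simplification
# of what rsync actually does.
import hashlib
import os
import shutil

def snapshot(source, prev_snap, new_snap):
    """Copy `source` into `new_snap`, hard-linking any file that looks
    unchanged relative to `prev_snap` so it takes no extra space."""
    checksums = []
    for dirpath, dirnames, filenames in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        dest_dir = os.path.normpath(os.path.join(new_snap, rel))
        os.makedirs(dest_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dest_dir, name)
            old = os.path.join(prev_snap, rel, name) if prev_snap else None
            unchanged = (
                old is not None
                and os.path.isfile(old)
                and os.path.getsize(old) == os.path.getsize(src)
                and os.path.getmtime(old) == os.path.getmtime(src)
            )
            if unchanged:
                os.link(old, dst)       # unchanged: share the same inode
            else:
                shutil.copy2(src, dst)  # new or changed: real copy, keeps mode/times
            with open(dst, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            checksums.append(digest + "  " +
                             os.path.normpath(os.path.join(rel, name)))
    # leave the checksum file inside the snapshot, as suggested above
    with open(os.path.join(new_snap, "CHECKSUMS.sha256"), "w") as f:
        f.write("\n".join(checksums) + "\n")

# e.g.:  snapshot("/home", "/backups/hourly.1", "/backups/hourly.0")

Restoring is then exactly what I described: go into hourly.N or
daily.N and copy the file back from the same relative path.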

-- 
Jerry Feldman <gaf at blu.org>
Boston Linux and Unix
PGP key id: 3BC1EB90
PGP Key fingerprint: 49E2 C52A FC5A A31F 8D66  C0AF 7CEA 30FC 3BC1 EB90




