| Subject | Re: [Firebird-Architect] Re: Incremental Backups |
|---|---|
| Author | Lester Caine |
| Post date | 2004-09-15T06:40:27Z |
Olivier Mascia wrote:
>> Even having conceded that the implementation is nothing, does support
>> for serial devices make sense any more?
>
> I'd say for mobility reasons. Storing a tape away (or shipping it to
> some remote location) is probably still easier than shipping a disk
> in its tray.

The one thing that I am becoming more conscious of is the need to MANAGE
the data. Dumping 100GB from a server to tape may be easy, but putting
the system back together after a major failure never is. (I never did
manage to rebuild a unix system from its backup tape ;) )
I prefer to separate programs from data and manage the data; in
most cases a single CD suffices for a nightly backup.
Large systems, which I am hopefully going to start becoming involved
with, have a similar problem. The bulk of the data is not changing on a
daily basis; only the links round the edge change. Copying 100GB of
data every night is simply a waste: you only want the changes, hence the
incremental backup.
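A minimal sketch of that idea, assuming a simple mtime-based scheme with
invented paths (real tools such as rsync or nbackup handle the corner
cases properly):

```python
#!/usr/bin/env python
# Incremental-backup sketch: copy only files changed since the last run,
# recorded as a timestamp file. All paths here are hypothetical examples.
import os
import shutil
import time

SOURCE = "/srv/data"            # the 100GB of mostly static data
DEST = "/mnt/backup/increment"  # tonight's incremental set
STAMP = "/var/lib/backup/last_run"

last_run = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0.0

for root, _dirs, files in os.walk(SOURCE):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getmtime(src) > last_run:      # changed since last backup
            rel = os.path.relpath(src, SOURCE)
            dst = os.path.join(DEST, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)                # preserve timestamps

# Touch the stamp so the next run only picks up newer changes.
with open(STAMP, "w") as f:
    f.write(str(time.time()))
```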
I still prefer files to blobs, for exactly the above reason. The bulk
static data can be stored at several sites; if the database throws a
wobbly, there is only a small area to fix, and one can get running again
more quickly than if the whole thing has to be reloaded. Once the core
system is restored, the fine detail can be loaded as required. If the
system structure is built correctly, the end users need never know that
a machine has failed.
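As a rough illustration of the files-not-blobs layout (table and column
names are invented, and sqlite3 stands in for Firebird only to keep the
example self-contained): each row carries just a path and a checksum, so
a restore reloads tiny rows while the bulk bytes stay on the filesystem.

```python
# Files-not-blobs sketch: the database stores metadata and a path;
# the bulk data lives outside the database, possibly at several sites.
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE document (
                    id   INTEGER PRIMARY KEY,
                    path TEXT NOT NULL,   -- where the file really lives
                    sha1 TEXT NOT NULL    -- integrity check after restore
                )""")

def register(path):
    """Record a file's location and checksum instead of storing a blob."""
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    conn.execute("INSERT INTO document (path, sha1) VALUES (?, ?)",
                 (path, digest))

def fetch(doc_id):
    """Restore-time lookup: a tiny row points at the bulk data."""
    path, sha1 = conn.execute(
        "SELECT path, sha1 FROM document WHERE id = ?",
        (doc_id,)).fetchone()
    with open(path, "rb") as f:
        data = f.read()
    assert hashlib.sha1(data).hexdigest() == sha1, "file corrupted or stale"
    return data
```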
--
Lester Caine
-----------------------------
L.S.Caine Electronic Services