|Subject||RE: [Firebird-Architect] Re: [Firebird-devel] New backup technology|
> -----Original Message-----
> From: Artur Anjos [mailto:artur@...]
> Sent: Wednesday, July 09, 2003 1:00 PM
> To: Firebird-Architect@yahoogroups.com
> Subject: Re: [Firebird-Architect] Re: [Firebird-devel] New backup technology
> > I think a strategy well worth considering is to implement a "clone
> > database" operation using the "create shadow" code, but one that closes
> > the file as soon as the copy is complete. As mentioned earlier, this is
> > a page-level copy, with the engine propagating random-access page
> > updates below the running high-water mark to the shadow. Following the
> > operation, the clone is an exact copy of the database file as it
> > existed when the last page was written. The clone database can either
> > be copied or just retained.
> We have just been talking about this option in fb-support. The current
> state is that we can create a shadow at any time; the problem is in
> stopping it. DROP SHADOW deletes the shadow file as well, so that is not
> an option. We also have an 'obscure parameter' (obscure = not
> documented) in gfix, -kill, that makes the database stop using the
> shadow. I'm really a newbie in the Fb source code, but it seems to me
> that the kill implementation is just 'stop it, I don't care about
> anything else'.
> (Please someone correct me here, but looking at the source it seems to
> me that this operation just deletes the shadow reference in the internal
> tables and changes some flags to tell the engine to stop using it.)
> AFAIU, some work on this "kill" method would be enough to give us a
> workable backup file 'on the fly'.
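The sequence Artur describes would look roughly like this (a sketch only: the database path is illustrative, and whether -kill leaves the shadow file in a usable state is exactly the open question in this thread):

```shell
# 1. Create a shadow; the engine populates it and keeps it in sync:
isql -user SYSDBA employee.fdb
SQL> CREATE SHADOW 1 'employee.shd';
SQL> EXIT;

# 2. Detach the shadow without deleting the file (the undocumented switch):
gfix -kill employee.fdb

# If -kill behaves as hoped, employee.shd is now a standalone
# page-level copy of the database -- a backup 'on the fly'.
```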
That was one of my pet projects while I was at Borland that I never got
around to doing -- backups via making a shadow and cutting the shadow
loose. Personally, I think this is a quick and clean approach to solving
the problem.
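The shadow-and-detach scheme quoted above can be sketched in a few lines. This is an illustrative model only -- all names are hypothetical, and Firebird's real shadow code lives in the engine's page-write path, not in a class like this:

```python
# Illustrative sketch of the "clone via shadow" idea: copy pages
# sequentially up to a high-water mark, while the engine mirrors any
# concurrent writes below that mark into the clone.

class ShadowClone:
    def __init__(self, db_pages):
        self.db = db_pages                   # live database page images
        self.clone = [None] * len(db_pages)  # the shadow being built
        self.high_water = 0                  # pages below this are copied

    def engine_write(self, page_no, data):
        """Every page write during the clone goes through here."""
        self.db[page_no] = data
        if page_no < self.high_water:
            # Propagate writes below the high-water mark so pages
            # already copied into the clone stay current.
            self.clone[page_no] = data

    def copy_next_page(self):
        """Copy one page and advance the high-water mark."""
        if self.high_water >= len(self.db):
            return False        # done: the clone can be "cut loose"
        self.clone[self.high_water] = self.db[self.high_water]
        self.high_water += 1
        return True
```

Once copy_next_page() returns False, the clone is an exact page-level copy of the database as of the last page written, and detaching it costs nothing further.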
The "tag each page and write to a log file" approach sounds remarkably
like the transaction logging that was attempted in 1991 and could never
be debugged before 4.0 had to ship, mixed in with a little Oracle-style
rollback.
That approach certainly has advantages when the database is very large
relative to the number of pages that will change during the backup
interval (i.e., the log file will be small). But remember that the log
file has to be reapplied if the database crashes during the backup
period. Furthermore, you have all the on-disk consistency issues with
the log file that are already solved in the current database files. This
breaks the current "instant recovery after crash" ability.
It also adds some complexity: while Oracle DBAs would have no problem
understanding the need for separate database files and logging files
during a backup, it is a new layer of complexity for existing
InterBase/Firebird users.