Subject | Super-transactions and Incremental Backup |
---|---|
Author | Jim Starkey |
Post date | 2000-06-20T16:11:34Z |
Jason has proposed a scheme to support incremental backups. His
proposal touches upon a fair number of important architectural
mechanisms within the engine and toolset, and is worth examining
in some detail. During the discussion I suspect several components
of the proposal will drift, but let's take the various pieces
separately and try to bring them back together at the end.
In the interest of creativity, I'll try to limit my remarks
to questions and elaborations on the current architecture and tricky
problems. If we get mired, I may suggest a different direction.
To improve the tractability of disaster recovery for large,
essentially stable databases, Jason proposed a system involving:
1. A mechanism to create a stable freeze point within a database
from which a backup may be taken and post-freeze-point updates
can be located.
2. An incremental backup utility to back up record versions created
after the freeze point.
3. An incremental restore utility to apply incremental backups
to a database restored to a freeze point.
The backup strategy would be as follows:
1. DBA creates freeze point.
2. DBA performs backup to freeze point.
3. DBA periodically takes incremental backups from freeze point to
current state.
4. To recreate the database, DBA restores the database from the
backup in step 2 to the freeze point, then applies the incremental
backups.
Question 1: Freeze Point Implementation
Could you describe what information is kept internally to implement
freeze points? Do keep in mind that update transactions may be in
progress at the freeze point and may commit, roll back, or
enter limbo after the freeze point is declared.
Question 2: Garbage Collection
I gather that the idea is to retain the newest pre-freeze-point
version of a record during garbage collection. How is that
version identified? Keep in mind that it will require
special handling beyond just identification. Old record versions
are usually stored as deltas from the succeeding version. In
the case of the pre-freeze-point version, the succeeding version
will be volatile, so the pre-freeze-point version will need to be
restored as a full record.
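To make the mechanics concrete, here is a rough sketch of the
expansion, with invented types and helpers (copy_full, apply_delta);
the real logic lives down in VIO/DPM and is considerably hairier:

```c
/* Sketch only: materializing the newest pre-freeze-point version as
 * a full record.  Old versions are deltas against their successors,
 * so we walk from the (always full) head version backward, expanding
 * as we go.  Types and helpers are invented for illustration. */
typedef long TraNumber;

typedef struct rv {
    TraNumber      rv_transaction; /* transaction that wrote this version */
    struct rv     *rv_back;        /* next older version, or NULL */
    int            rv_is_delta;    /* delta against successor, or full? */
    unsigned char *rv_data;
    unsigned       rv_length;
} RV;

unsigned char *copy_full(const unsigned char *data, unsigned length);
unsigned char *apply_delta(unsigned char *successor,
                           const unsigned char *delta, unsigned length);

/* Returns the survivor fully materialized, or NULL if every version
 * in the chain is post-freeze.  The caller must then rewrite the
 * survivor on disk as a FULL record: its successor is volatile, and
 * a delta whose successor has been garbage collected is worthless. */
unsigned char *materialize_survivor(RV *head, TraNumber freeze_tid)
{
    unsigned char *record = copy_full(head->rv_data, head->rv_length);
    for (RV *v = head->rv_back; v; v = v->rv_back) {
        record = v->rv_is_delta
            ? apply_delta(record, v->rv_data, v->rv_length)
            : copy_full(v->rv_data, v->rv_length);
        if (v->rv_transaction < freeze_tid)
            return record;    /* newest pre-freeze version, now full */
    }
    return NULL;
}
```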
To preserve database integrity for Classic, all update algorithms
must use careful write (meaning an object must be written before
any pointer to it), and to avoid deadlocks, all on-disk data
structures must be traversed in a prescribed order with "handoffs"
(the object is fetched, then the pointer is released). Specifically,
record version chains must be traversed version by version, releasing
the page lock on the prior version. Because you can never back up
directly, if you need the prior version you must start over from the
pointer page, then the data page, then record version to record
version, all while dealing with the fact that somebody else may have
intervened, changing almost anything, even doing the garbage
collection for you. Needless to say, this makes garbage collection
very, very tricky. The trick in this case will be to fetch the
garbage collect chain, find the survivor (the pre-freeze-point
version), replace the back pointer of the head record with the
survivor, then finish index and blob garbage collection from the
version chain. Poor Ann spent about three months getting the last
of the race condition bugs out of VIO_backout, so prepare for a
challenge.
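For what it's worth, here is the shape of the dance as I see it,
reduced to an annotated skeleton; every name is invented, and this
is the shape of the problem, not an implementation:

```c
/* Skeleton of a freeze-point garbage collect under the hand-off
 * rules.  Invented names; the real thing would have to live
 * alongside VIO_backout. */
void gc_to_freeze_point(void)
{
    for (;;) {
        /* 1. Hand-off traversal: fetch the pointer page, hand off to
         *    the data page, hand off to the head record version.
         *    Each hand-off releases the prior page lock, so anything
         *    already read can change behind our backs. */

        /* 2. Walk the version chain, hand-off by hand-off, to find
         *    the survivor: the newest version older than the freeze
         *    transaction.  The chain can never be walked backward,
         *    so overshooting means starting over. */

        /* 3. Careful write: materialize the survivor as a full
         *    record and get it on disk BEFORE any pointer to it. */

        /* 4. Re-fetch the pointer page, the data page, and the head
         *    record, and verify the chain still looks as it did --
         *    somebody may have updated the record or done the
         *    garbage collection for us.  If so, release everything
         *    and retry from the top. */

        /* 5. Swap the head's back pointer to the survivor, then
         *    clean up index entries and blobs reachable only from
         *    the now-orphaned intermediate versions. */

        break;    /* success */
    }
}
```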
Question 3: Garbage Collection
The engine currently does a garbage collection test on every record
visit, but the test is dirt cheap: If a record is older than the
oldest interesting transaction and has a back pointer, it's time
to garbage collect. Under your scheme, many non-garbage-collectable
records will appear to be garbage collection candidates by the above
test. The only way to find out whether a garbage collection cycle
is required is to follow the back pointer to fetch the old version
and get its transaction id. Whether or not it is collectable, the
engine will need to re-fetch the head record by re-fetching the
pointer page and then the data page. These are virtually guaranteed
to be in cache, but the overhead is significant. Do you have any
ideas on how to avoid this retrieval performance hit?
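For reference, the shape of the cost, schematically; types and
helpers are invented, and the field names are only loosely modeled
on the on-disk record header:

```c
/* Sketch of the candidacy test and the extra work the freeze scheme
 * adds.  Schematic, not real code. */
typedef long TraNumber;

typedef struct rhd {
    TraNumber rhd_transaction;  /* transaction that wrote this version */
    long      rhd_b_page;       /* back pointer: page of older version */
    int       rhd_b_line;       /* back pointer: line on that page */
} RHD;

RHD *fetch_back_version(long page, int line);    /* invented helper */

int needs_gc_cycle(RHD *head, TraNumber oldest_interesting,
                   TraNumber freeze_tid)
{
    /* Today's test: dirt cheap, uses only the header in hand. */
    if (head->rhd_transaction >= oldest_interesting || !head->rhd_b_page)
        return 0;

    /* Under the freeze scheme this fires even when the only old
     * version is the protected pre-freeze survivor.  Ruling that
     * out means following the back pointer... */
    RHD *old_version = fetch_back_version(head->rhd_b_page,
                                          head->rhd_b_line);
    int collectable = old_version->rhd_transaction >= freeze_tid;
    /* (Simplified: a longer chain could still hold collectable
     *  post-freeze versions in front of a pre-freeze survivor.) */

    /* ...and, collectable or not, the caller must then re-fetch the
     * pointer page and the data page to get back to the head record.
     * Both are virtually certain to be cached, but not free. */
    return collectable;
}
```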
Question 4: Architecture (ok, almost a hint)
Why do you need to reconstruct a freeze point? Would an alternative
mechanism that identified post-freeze-point transactions without
impacting garbage collection be sufficient? After all, the incremental
backup doesn't care about the world before the freeze point, just
after.
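To make the hint concrete (my sketch of the alternative, not
Jason's proposal): if the freeze point is nothing more than a
remembered transaction id, the backup-side test collapses to a
comparison, and garbage collection never needs to know:

```c
/* If the freeze point is just a remembered transaction id, the
 * incremental backup's question about any record version is a
 * comparison.  Sketch only; names invented. */
typedef long TraNumber;

int wanted_by_incremental_backup(TraNumber rec_transaction,
                                 TraNumber freeze_tid)
{
    return rec_transaction > freeze_tid;
}
```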
Question 5: Backup Utility
Is this a totally new utility or a tweak to gbak? If gbak, which
uses the published API, how does it tell the engine that it wants
only incremental records? Do we need a new API, new request options,
new BLR, new SQL, or what?
Question 6: Backup/Restore Phantoms
I think I understand how the backup utility gets records created
or modified post freeze point (though not how it tells which is
which), but I don't understand how it gets records deleted post
freeze point.
Question 7: Backup/Restore Limbo Transactions
How does backup (and restore) handle transactions in limbo?
Question 8: Restore Strategy
How does restore work? Does it use the published API? How does
restore know when to insert a record, modify a record, or delete
a record?
Question 9: Restore Referential Integrity
How does restore handle referential integrity checks when records
have been backed up in physical order?
Jim Starkey