Subject: Re: [IB-Architect] Full and Differential Backup
Author: Olivier Mascia
From: "Jim Starkey" <jas@...>

First, thank you very much for your careful and quick review of my
somewhat fragile and clumsy initial proposal.

| By design, disk page writes are scheduled so the database on disk
| is always valid and consistent. This is not true of dirty pages
| in cache. You are much safer considering page changes only when the
| engine would otherwise be writing a page to disk.

Very well.

| May I make two suggestions to simplify this?
|
| First, relax the requirement that pages be written in order. The
| engine is always free to write a page above the backup high water mark.
| If it wants to write a page already copied to the backup, just write
| the new page to the file. Simple and works like a charm. Both
| the journalling code and the snapshot code use this hack.

Great! I was blind not to see this simplification!
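
To make the rule concrete, here is a minimal C sketch of the hack,
assuming hypothetical engine hooks (page_write_hook, backup_state) that
are not in the actual sources. The sweep copies pages in order up to a
high water mark; any engine write at or below the mark is simply
re-written into the backup image, while writes above it are left for
the sweep to pick up later.

    #include <stdio.h>

    typedef unsigned long PAGE_NO;

    struct backup_state {
        FILE   *file;        /* backup file, opened for random access */
        PAGE_NO high_water;  /* highest page number copied so far */
        long    page_size;   /* database page size in bytes */
    };

    /* Called whenever the engine is about to write a page to disk. */
    void page_write_hook(struct backup_state *bkp,
                         PAGE_NO page, const char *data)
    {
        if (page > bkp->high_water)
            return;          /* the sweep has not reached it yet */

        /* Already copied: just overwrite its image in the backup. */
        fseek(bkp->file, (long) page * bkp->page_size, SEEK_SET);
        fwrite(data, (size_t) bkp->page_size, 1, bkp->file);
    }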

| Second, if you are willing to assume the backup file is a disk file
| rather than a streaming device, create a page by page image of
| the database file -- secondary writes (previous paragraph) just
| overwrite previous writes. This has the added benefits of a)
| reducing the size of the backup file and b) eliminating the restore
| step. Also notice that this is the existing shadow creation code.
| Other than ripping out the check for non-local files and adding
| a clean mechanism to detach the shadow, the work is done.

I liked the idea of being able to stream instead of doing direct access, but
the close relationship with the shadowing code now makes me think twice. I
tend to prefer your suggestion, because it clearly leads to better reuse of
the existing architecture. My main reason for wanting to stream was to be
able to compress the output on the fly. But some OSes now offer file
compression at the file system level, file by file. It could then just be a
matter of setting the right creation flag on those shadow files, and bingo.
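
As an aside, on Windows/NTFS this is not literally a creation flag:
compression is switched on after creation with DeviceIoControl. A
minimal sketch, assuming NTFS; compress_shadow() is a made-up helper
name, while FSCTL_SET_COMPRESSION is the real Win32 mechanism:

    #include <windows.h>
    #include <winioctl.h>

    BOOL compress_shadow(const char *path)
    {
        USHORT state = COMPRESSION_FORMAT_DEFAULT;
        DWORD  bytes;
        BOOL   ok;
        HANDLE h = CreateFileA(path, GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, NULL);

        if (h == INVALID_HANDLE_VALUE)
            return FALSE;

        /* NTFS compresses the file transparently from here on. */
        ok = DeviceIoControl(h, FSCTL_SET_COMPRESSION,
                             &state, sizeof(state),
                             NULL, 0, &bytes, NULL);
        CloseHandle(h);
        return ok;
    }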

| >
| >8. Now for pass two. During this phase, any page going to be touched
| >by the engine (existing pages, or new pages added at the end of the
| >database file) has to be appended to the log (not the page content,
| >just its address).
|
| Skip this. If you add pages out of order to the backup (serial or
| random access), this pass isn't needed.

Agreed. I have already dropped pass 2 in my mind, following your initial
suggestion above.

| >
| >9. To restore such a backup, the restore utility sequentially reads
| >the pages stored during phase 1 and reconstructs (a sequential
| >process) the database files.
|
| Hey, we can skip this too. We're on a roll. Take the detached shadow,
| call it a backup file. Move it to your tape farm. When the bad
| day comes, the restore utility is the copy command.

Yes. So more and more this idea turns into an extension of the shadow
processing. Basically, have a way to properly activate (that is, build) and
then detach a shadow, without needing to stop engine processing.
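
As a sketch, the whole online backup then collapses to three steps;
shadow_create, shadow_wait_active and shadow_detach below are made-up
names standing in for the existing shadow machinery:

    typedef struct database DATABASE;
    typedef struct shadow   SHADOW;

    SHADOW *shadow_create(DATABASE *db, const char *file_name);
    void    shadow_wait_active(SHADOW *shadow);
    int     shadow_detach(SHADOW *shadow);

    int online_backup(DATABASE *db, const char *backup_path)
    {
        /* 1. Create a shadow on the backup file; the engine starts
         *    copying pages and mirroring new writes into it. */
        SHADOW *sh = shadow_create(db, backup_path);

        /* 2. Wait until every page has been copied and the shadow
         *    is a complete, consistent image of the database. */
        shadow_wait_active(sh);

        /* 3. Detach cleanly: stop mirroring, flush, close.  The
         *    file left behind IS the backup; restore is a copy. */
        return shadow_detach(sh);
    }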

| >10. For differential backups, one could imagine having a backup
| >counter in each page, and a "last backup done" number in a main header
| >page for the database. After a full backup, the idea would be to
| >increase the global counter to the highest backup number encountered
| >in any of the pages. So basically we know after a full backup that the
| >last time a full backup was done, we backed up all pages marked with
| >that number or a smaller one.
| >
| >
|
| Bingo. Give the man a kewpie doll. But rather than using it as an
| incremental backup counter, use the same word as a page revision
| number. If we tweak the database to maintain a separate (pages? file?)
| set of current page versions, a backup can be synchronized with the
| active database by comparing page version numbers. There are
| some trickinesses here that can be fun to work out. The problem is
| maintaining a satisfactory level of reliability (or figuring out a way
| to cope with unreliability) without taking a performance hit.

You go much further than my original intent! But despite the really tricky
issues in getting this right, it might be a very nice extension.
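
To show how little machinery the differential pass itself needs, here
is a hedged C sketch; pag_version is a made-up field name, and fetch
and copy stand in for whatever page access the engine provides. Only
pages newer than the mark recorded at the last backup get copied:

    typedef unsigned long PAGE_NO;
    typedef unsigned long VERSION;

    struct page_header {
        VERSION pag_version;     /* bumped on every write of the page */
    };

    /* Copy every page newer than the last backup's mark; return the
     * new mark to record in the backup once the pass completes. */
    VERSION differential_pass(VERSION last_mark, PAGE_NO page_count,
                              struct page_header *(*fetch)(PAGE_NO),
                              void (*copy)(PAGE_NO))
    {
        VERSION new_mark = last_mark;
        PAGE_NO n;

        for (n = 0; n < page_count; n++) {
            struct page_header *pg = fetch(n);
            if (pg->pag_version > last_mark) {
                copy(n);
                if (pg->pag_version > new_mark)
                    new_mark = pg->pag_version;
            }
        }
        return new_mark;
    }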

| There is a school of thought that suggests that depending on information
| stored in an object to restore the object in the face of disaster has
| certain disadvantages... On the other hand, doing the accounting so
| a poor mortal can determine his exposure should the computer gods have
| an off day is very important.

:-)

| >
| >My main goal here is to make sure someone will at least verify (or
| >help me verify when I am able to look at the code) whether this backup
| >scheme has any chance of being implementable without too much hacking
| >in the current engine code.
| >
| >This is certainly not a perfect disaster recovery scheme.
| >
| >
|
| No, but an excellent start.
|
| Jim Starkey

Thank you Jim. Let's come back to this discussion when I've had the
opportunity to learn from the engine code, or sooner if other people with
that knowledge jump in before then.

Olivier Mascia
om@...