Subject Re: [IB-Architect] Backups of large database & super transactions
Author Jim Starkey
At 04:59 PM 6/16/00 -0400, Doug Chamberlin wrote:
>At 6/16/00 04:17 PM (Friday), Jim wrote:
>>I take it that you are unhappy with both the time to back up a database
>>containing many large and largely stable images and the time it takes to
>>restore the database in face of a disaster.
>I would add that the size of the backup file is also a problem for very
>large databases. This is, of course, related to the time it takes to backup
>and restore but is also a separate problem which should be addressed.

How would the following strategy grab you:

1. A shadow is spun off.
2. Periodically (nightly?) the shadow is re-synced to the
primary on a page by page basis (page version numbers
would avoid walking non-volatile portions).
3. A journal stream is flushed after each resync.

This would minimize both IO and recovery time at the cost of a full,
more or less online, clone of the database. This is probably worst
case for disk usage. Disk usage can be cut down by storing data
only (gpre strategy) at a huge cost in recovery resources or by
compression (big hit at backup time). So, Mr. Chamberlin, describe
your priorities: disk usage, cpu on backup, cpu (and disk rattling)
on restore. What would convince you (or your employer) to go to the
disk store? Or what are you willing to absorb to avoid the trip?

Jim Starkey