| Subject | Re: [IB-Architect] Backups of large database & super transactions |
| --- | --- |
| Author | Jim Starkey |
| Post date | 2000-06-16T23:10:20Z |
At 03:19 PM 6/16/00 -0700, Jason Wharton wrote:
>Jim,
>
>OK, I'm clearing my mind.
>
>> Jason, please start with the requirements. Unless and until you can
>> state what problem you're trying to solve it is impossible to judge
>> whether any particular scheme is a satisfactory solution.
>
>
>AFAIK This is nothing like I am proposing. Please clear your mind of those
>biases.
>
>I'm not talking about maintaining an external "log" of any kind. I'm
>proposing that the existing versioning engine be modified such that the
>garbage collection and transaction management system will efficiently
>protect a "frozen" transaction and make it so that the changes since that
>time can be discernable. I see this as a natural extension/enhancement of
>the versioning engine we already enjoy. The "log" is actually woven through
>the version tree of records. We would just need a way to extract and
>re-apply them.
>

You're essentially talking about stopping garbage collection. Record
chains get longer and longer, data pages get fluffier and fluffier,
the amount of space filled with near useless crud goes up, indexes
fill up with useless values pointing to unreachable records, index
retrievals waste resources chasing deleted records, disk usage
goes up, disk I/O goes up, memory usage goes up, and the amount of
processing required to find a record goes up. That's an extremely
expensive way to take a snapshot. The resources wasted wallowing in
a database with internals like a landfill dwarf the cost of spinning
off a clone.

Sure, it could be done and it would be easy. As soon as the source
comes out, we'll make you a special version with garbage collection
disabled and you can try for a week or two. The bad part won't be
that your application slowly grinds to a halt but the inevitable
sweep that happens when garbage collection is re-enabled. It will
finish sometime in the next millennium if you can keep your machine
alive that long.

Database performance is mostly an exercise in maintaining density
and locality while avoiding tying yourself in knots. Turning off
internal maintenance is not that good of an idea.
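
To make the cost concrete, here is a minimal sketch of the back-version
chain walk described above. It is my own illustration, not InterBase
source: the struct, the simplified visibility rule, and the numbers are
invented, but it shows why every reader of a record pays for an
un-collected chain.

```c
/* Minimal sketch (not InterBase source): each update pushes a new version
 * onto the front of a record's chain, and a reader walks the chain until
 * it finds the first version visible to its snapshot.  With garbage
 * collection disabled, every obsolete version stays on the chain, so the
 * walk gets longer with every update. */
#include <stdio.h>
#include <stdlib.h>

typedef struct version {
    long            txn_id;   /* transaction that wrote this version  */
    const char     *data;     /* stands in for the record payload     */
    struct version *older;    /* back pointer to the previous version */
} version;

/* Hypothetical visibility rule: a snapshot sees the newest version whose
 * writer committed at or before the snapshot's transaction number. */
static const version *visible_version(const version *newest, long snapshot)
{
    long hops = 0;
    for (const version *v = newest; v != NULL; v = v->older, hops++)
        if (v->txn_id <= snapshot) {
            printf("found txn %ld after wading past %ld dead versions\n",
                   v->txn_id, hops);
            return v;
        }
    return NULL;
}

int main(void)
{
    /* 100,000 updates with garbage collection turned off: every version
     * ever written is still chained to the record. */
    version *newest = NULL;
    for (long txn = 1; txn <= 100000; txn++) {
        version *v = malloc(sizeof *v);
        v->txn_id = txn;
        v->data   = "payload";
        v->older  = newest;
        newest    = v;
    }

    /* An old snapshot (started around txn 10) pays the full price. */
    visible_version(newest, 10);
    return 0;
}
```

With garbage collection running, the chain stays short and the same
lookup costs a few hops at most.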
>What I think it boils down to is a mechanism to extrapolate a "log" from
>record versions protected under a saved super transaction. Then, on the
>restore side, it would have to take that log and apply it by creating
>appropriate record versions.
>

Sorry, all the versions you'll ever want are already there, clogging
up the works. But perhaps if I look between the lines I'll discover
that you really mean "remove the inappropriate record version." Or
did I get it wrong again? You did say you weren't keeping an
external log of any kind, didn't you?
>I agree that things like Index(s) should be left out. It should only be
>actual data that gets streamed out. I also think that any metadata changes
>should "break" the ability to walk from one freeze point to another. If
>someone needs to alter their table structures then the database should be
>considered incompatible with the other structure. Thus, a DBA would
>establish a new base after performing any structural changes. It is already
>my rule of thumb to do a total backup and restore after altering a database
>structure...
>

Now I am confused. You're not keeping a log of any kind, you've got
a special backup in bed with the engine, and something is streaming
data.

A backup can never hurt you, but you're wasting your time with the
restore. There is no loss of performance over a meta-data change.
The system just keeps track of what's what. We designed it that
way to encourage people to keep their databases in sync with
their understanding of their problem. Anyway, any useful mechanism
has to survive ordinary day to day operations.
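
If it helps, here is roughly the idea behind "the system just keeps
track of what's what," as illustrative code rather than the engine's
actual structures -- the record layout, the format numbers, and the
upgrade() routine below are invented for the example. A stored record
carries the number of the table format it was written under and is
brought up to the current format when it is read, so a meta-data change
never forces the table to be rewritten.

```c
#include <stdio.h>

#define MAX_FIELDS 8

typedef struct {
    int format_no;      /* table format this record was written under */
    int field_count;    /* how many fields that format stored         */
    int fields[MAX_FIELDS];
} record;

/* Present any stored record as if it had been written under the current
 * format: fields the old format never had are filled with a default (0
 * here, standing in for the column's declared default). */
static record upgrade(record r, int current_format, int current_fields)
{
    if (r.format_no != current_format) {
        for (int i = r.field_count; i < current_fields; i++)
            r.fields[i] = 0;
        r.field_count = current_fields;
        r.format_no   = current_format;
    }
    return r;
}

int main(void)
{
    /* Format 1 had two fields; an ALTER TABLE created format 2 with three.
     * The old row on disk is untouched; it is upgraded only when read. */
    record old_row = { 1, 2, { 10, 20 } };
    record cooked  = upgrade(old_row, 2, 3);

    printf("format %d: %d %d %d\n", cooked.format_no,
           cooked.fields[0], cooked.fields[1], cooked.fields[2]);
    return 0;
}
```

Which is why the backup/restore cycle after an ALTER buys you nothing.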
>> So, Jason, in your application:
>>
>
>> 1. How often can you afford to perform a full backup?
>
>No more than once a week, I easily imagine others at more of a monthly
>schedule.

Does that mean you can tolerate a week's loss of data? If so, I don't
think your application's typical. Most folks get a bit testy when they
lose a week's worth of work.
>> 2. After a disaster, how long can a recovery take?
>
>Isn't the goal of a commercial product to be as short as possible?
>

Nope. Everything is a tradeoff. Recovery time is always traded against
normal update time. If recovery time were the only consideration, things
would be a lot easier.
>> 3. Is it reasonable to assume a skilled DBA is available
>> during recovery to resolve problems?
>
>If there isn't one then they probably don't have a database over 500MB. Who
>cares?
>

I do. My goal is to reduce the expertise required to care for and feed
a database, not to provide full employment for DBAs. If you want
performance in Oracle, learn placement control. If you want both
performance and a life, use InterBase.
>> 6. When recovering from a journal, what information would a
>> DBA logically require to find the stopping point?
>
>Again, your biases are leaking in here.
>

Sorry about that. If the problem was a goof, the roll forward has to
stop before the goof. The problem is that you generally don't know
exactly who did what. Unless there is some clue as to when a transaction
started or what it did, the problem is unsolvable. (Hint: there are
2 bits per transaction and four states. How many bits are left over
to record start time?)
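
To spell the hint out with a sketch (the packing, array, and function
names below are invented for illustration, not the engine's actual page
layout): four states -- active, committed, rolled back, limbo -- fit
exactly in two bits per transaction, which leaves zero bits for a start
time or anything else.

```c
#include <stdio.h>
#include <stdint.h>

enum txn_state { TXN_ACTIVE = 0, TXN_COMMITTED = 1,
                 TXN_ROLLED_BACK = 2, TXN_LIMBO = 3 };

#define MAX_TXNS 1000000
static uint8_t tip[MAX_TXNS / 4];        /* two bits per transaction */

/* Pack a transaction's state into its two-bit slot. */
static void set_state(long txn, enum txn_state s)
{
    long byte  = txn / 4;
    int  shift = (int)(txn % 4) * 2;
    tip[byte] = (uint8_t)((tip[byte] & ~(3u << shift)) | ((unsigned)s << shift));
}

/* Read a transaction's state back out of its two-bit slot. */
static enum txn_state get_state(long txn)
{
    return (enum txn_state)((tip[txn / 4] >> ((txn % 4) * 2)) & 3);
}

int main(void)
{
    set_state(42, TXN_COMMITTED);
    set_state(43, TXN_LIMBO);

    printf("txn 42 = %d, txn 43 = %d\n", get_state(42), get_state(43));
    printf("%d transactions fit in %zu bytes -- with no bits left over\n",
           MAX_TXNS, sizeof tip);
    return 0;
}
```

State, and nothing else, which is why "stop before the goof" has nothing
to hang a timestamp on.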
Jim Starkey