Subject: Re: [IB-Architect] Super-transactions and Incremental Backup
> Maybe we are trying to get too fancy here.
> Here is a perhaps too-simplistic, but maybe useful, approach.
> Problem: How to back up data that has been changed since some arbitrary
> defined date/time.
> Solution: Every table has a field which stores a timestamp. The timestamp
> is updated on update or insert. The backup program is able to scan this
> field and back up only those records which fall in a predefined window.
> It won't solve all the problems, but it's really easy to set up and may be
> good enough for most people's needs. Of course this can be done with
> triggers now, but it would be neat if the database engine could keep the
> timestamps up to date for you transparently.
Actually, this would be very difficult to do accurately inside of a
transaction context. Some of the records needing to go to the output might
still be in limbo and get skipped. On a subsequent attempt, how would one
know what had already been streamed out to backup and which records were
still uncommitted? The timestamp reflects the time the DML was applied, not
the time the transaction became committed.
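A toy model of that failure (all names and timestamps here are invented) shows how a long-running transaction can slip through every backup window that is keyed on DML time:

```python
from dataclasses import dataclass

@dataclass
class Row:
    key: int
    dml_ts: int     # when the INSERT/UPDATE was applied
    commit_ts: int  # when its transaction finally committed

rows = [
    Row(key=1, dml_ts=10, commit_ts=12),  # short transaction
    Row(key=2, dml_ts=15, commit_ts=40),  # long-running: in limbo until t=40
]

def backup(rows, window_start, window_end, now):
    """Back up committed rows whose *DML* timestamp falls in the window."""
    return [r.key for r in rows
            if window_start <= r.dml_ts < window_end and r.commit_ts <= now]

# Pass at t=20 over window [0, 20): row 2 is still uncommitted, so skipped.
first = backup(rows, 0, 20, now=20)
# Pass at t=50 over window [20, 50): row 2's stamp (15) is now outside the
# window, so it is skipped again -- it never reaches any backup.
second = backup(rows, 20, 50, now=50)
```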
I guess my point is that you would need more than a timestamp, and you would
have to take into consideration that not all transactions are going to be
serialized. In InterBase this just isn't something that you even consider.
I do think you are right that something workable could be arranged on the
outside, but that is a job in and of itself...
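For what it's worth, here is a minimal sketch of arranging the quoted trigger scheme on the outside, with SQLite standing in for the engine (table, column, and trigger names are all made up for illustration). Note that it still stamps DML time, not commit time, so it inherits the limbo problem described earlier:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (
        id           INTEGER PRIMARY KEY,
        name         TEXT,
        -- stamped on insert by the DEFAULT, on update by the trigger
        last_touched TEXT DEFAULT (strftime('%Y-%m-%d %H:%M:%f', 'now'))
    );
    -- refresh the timestamp on every update, transparently to the writer
    CREATE TRIGGER customers_touch AFTER UPDATE ON customers
    BEGIN
        UPDATE customers
           SET last_touched = strftime('%Y-%m-%d %H:%M:%f', 'now')
         WHERE id = NEW.id;
    END;
""")

con.execute("INSERT INTO customers (name) VALUES ('Ann')")
con.execute("UPDATE customers SET name = 'Anne' WHERE id = 1")
con.commit()

def incremental_backup(con, window_start):
    """The backup pass: scan only rows touched since window_start."""
    return con.execute(
        "SELECT id, name FROM customers WHERE last_touched >= ?",
        (window_start,),
    ).fetchall()

backed_up = incremental_backup(con, "1970-01-01")  # first run takes everything
```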
The thought comes to mind: how expensive would it be to make available the
ability to query records along with the timestamp at which they were
committed and the USER who committed them? Much as DB_KEY is a
system-provided mechanism, it might be very useful to take advantage of
something similar here. I suppose what would end up happening is that the
transaction data that is generated would have to remain a permanent record
in order for this information to be accessed. Please don't flame, it's just
a thought...
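Purely as a sketch of what that might look like (everything below is hypothetical; InterBase exposes no such mechanism): keep each transaction's commit timestamp and user as a permanent record, and let the backup window select on commit time rather than DML time:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- hypothetical permanent record of committed transactions
    CREATE TABLE tx_log (
        tx_id     INTEGER PRIMARY KEY,
        commit_ts INTEGER NOT NULL,
        tx_user   TEXT    NOT NULL
    );
    -- each row remembers the transaction that last wrote it, much as
    -- DB_KEY is a system-provided handle on the row itself
    CREATE TABLE accounts (
        id    INTEGER PRIMARY KEY,
        total INTEGER,
        tx_id INTEGER REFERENCES tx_log (tx_id)
    );
""")

con.execute("INSERT INTO tx_log VALUES (1, 100, 'ann')")
con.execute("INSERT INTO accounts VALUES (1, 500, 1)")
con.execute("INSERT INTO tx_log VALUES (2, 200, 'bob')")
con.execute("INSERT INTO accounts VALUES (2, 750, 2)")
con.commit()

# Incremental backup keyed on *commit* time, with the committing USER;
# only transactions committed after the last backup (here, commit_ts 150)
# are picked up, regardless of when their DML was applied.
changed = con.execute("""
    SELECT a.id, a.total, t.commit_ts, t.tx_user
      FROM accounts a JOIN tx_log t USING (tx_id)
     WHERE t.commit_ts > 150
""").fetchall()
```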
CPS - Mesa AZ