Subject: Re: [ib-support] FB slow
Author: Ivan Prenosil
Post date: 2002-07-15T11:13:29Z
> From: Adrian Roman
> I started an update which modifies a field for all 30000 records. It takes
> only a couple of seconds. After that, without committing the transaction, I
> change the update statement to change the same field on all records, but
> with a different value. This can take even 10 minutes to complete.

At last some interesting problem :-)
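A minimal sketch of the sequence described above (the table name T and
column F are assumptions for illustration, not taken from the original post):

    /* same transaction throughout - no COMMIT between the statements */
    UPDATE T SET F = 1;   /* first update of all 30000 rows: seconds */
    UPDATE T SET F = 2;   /* second update of the same rows: minutes */

If the explanation below holds, committing between the two statements should
restore the fast path, because the second update then runs under a new
transaction id and creates ordinary delta backversions.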
When you modify a row for the first time (in a single transaction),
a new record version and a delta backversion are created
(and usually stored on the same page if they fit).
This should be reasonably fast.
However, when you update the same row in the same transaction
for the second time (i.e. when a new record version already exists),
IB/FB can't simply create another record version with the same transaction id.
Instead, it updates the already existing new record version;
but, because each command must be atomic,
i.e. it must update either all rows (30000 in your case) or none,
it must also be able to _undo_ all changes made by that single command
(e.g. if the update command has changed 29999 rows and an error occurs
on the last, 30000th row, all previous 29999 changes must be
cancelled/undone). To be able to undo any update, the engine must keep
the previous state of each row in memory.
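To make the atomicity requirement concrete, here is a hedged sketch; it
assumes the same table T and an integer primary key ID with values 1..30000
(none of these names are from the original post):

    UPDATE T SET F = 1;                 /* succeeds for all 30000 rows */
    UPDATE T SET F = 1 / (30000 - ID);  /* division by zero on ID = 30000 */

If the failing row happens to be processed last, 29999 rows have already been
changed when the error occurs, and all of them must be undone, while the
changes of the first statement stay in the (still uncommitted) transaction.
The in-memory copies are what make that per-statement undo possible.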
The problem is that
- the row kept in memory is a complete copy, not just a delta
- moreover, no RLE compression is applied (unlike data stored in the cache/on disk)
For this reason, the second update can have huge memory requirements
(it can easily allocate more RAM than the size of the gdb file).
On weak machines this can, for example, cause swapping.
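To get a feel for the numbers (the row sizes below are assumptions for
illustration, not measurements): if each of the 30000 rows occupies, say,
2 KB uncompressed but shrinks to about 200 bytes on disk thanks to RLE,
the table data takes roughly 30000 * 200 B = ~6 MB in the gdb file,
while the in-memory undo copies need roughly 30000 * 2 KB = ~60 MB,
i.e. ten times the on-disk size.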
My suspicion is that there is also a problem similar to the one with the cache,
i.e. the more RAM is allocated (to support the undo function),
the more CPU power is consumed...
Ivan
http://www.volny.cz/iprenosil/interbase