Subject: Re: [ib-support] Re: FB slow
Author: Adrian Roman
Post date: 2002-07-14T20:15:53Z
> Nonsense. It is slow when asked to update the same 30,000 rows twice in a
> single transaction. Would you write an application like that? What would

I would understand the second update being slower than the first. But not THAT slow.
The first update is done in 2-3 seconds; the second keeps CPU usage at 99% for
10 minutes.
And that's on a PIV 1.9 GHz with 512 MB RDRAM!
That's too much, whatever IB/FB does with those records. Besides, the records
should already be cached (at least in part) from the first update.
> That's updating 30,000 rows, then going back and updating the same 30,000
> rows again. The second time through it has to check each of the original
> rows as well, to determine whether any other transactions have updates
> pending on them...

No matter what it does, it shouldn't take that long. It's 200 times slower
than the first update.
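Just so we are talking about the same thing, the scenario boils down to something
like the sketch below: two UPDATE passes over the same rows, timed separately,
inside a single transaction. This is only an illustration; it assumes a DB-API 2.0
style Python driver (kinterbasdb is the usual one for IB/FB), and the table and
column names are placeholders rather than my actual test schema.

```python
import time

def time_double_update(con):
    """Run the same UPDATE twice in one transaction and time each pass.
    `con` is assumed to be a DB-API 2.0 connection (e.g. kinterbasdb);
    `test_table`/`val` stand in for the real test schema."""
    cur = con.cursor()
    for attempt in (1, 2):
        start = time.time()
        cur.execute("UPDATE test_table SET val = val + 1")  # same ~30,000 rows
        print("update pass %d: %.1f s" % (attempt, time.time() - start))
    con.commit()  # both passes ran inside a single transaction
```

The first pass reports a few seconds; the second is the one that sits at 99% CPU.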
If I understand it well, a transaction works like this:
It has a transaction number. When it updates a record, it looks at the record's
transaction number. If that number is less than or equal to its own, it updates
the record, calculates the delta between the original data and the new data,
writes the delta somewhere else, and points to it from the current record, so
that a transaction which needs older data can follow the pointer and
reconstruct the old record if need be.
A transaction with a number less than the record's current transaction number
cannot update the record. It might wait to see whether the transaction that
wrote the current version gets rolled back, if that transaction is still active.
Nevertheless, there was only one active transaction at that time.
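To make the bookkeeping concrete, here is a toy model of that rule as I
understand it (my own sketch, not the engine's actual code): each record
version carries the number of the transaction that wrote it, and an update
pushes a delta onto a chain of back versions. The `txn_is_active` callback
is an assumed helper, not anything IB/FB exposes.

```python
class Record:
    def __init__(self, txn_no, data):
        self.txn_no = txn_no        # transaction that wrote this version
        self.data = data            # current field values (a dict here)
        self.back_versions = []     # chain of (txn_no, delta) entries

def diff(old, new):
    # The delta keeps the *old* values of the fields that changed, so an
    # older transaction can walk back to the version it is allowed to see.
    return {k: v for k, v in old.items() if new.get(k) != v}

def try_update(record, txn_no, new_data, txn_is_active):
    """Update rule as described above: a transaction with a number lower
    than the record's cannot update it; it may wait if the other writer is
    still active. Otherwise a back version is written and the new data
    installed."""
    if record.txn_no > txn_no:
        if txn_is_active(record.txn_no):
            return "wait"       # the other writer might still roll back
        return "conflict"       # a newer committed version already exists
    record.back_versions.insert(0, (record.txn_no, diff(record.data, new_data)))
    record.txn_no = txn_no
    record.data = dict(new_data)
    return "updated"
```

With a single active transaction, the second update always lands in the
"updated" branch; all the work is in maintaining the back-version chain.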
The real reason for going back to the deltas is that new deltas are created by
the second update. InterBase might merge the two deltas into a single one, so
that there is a single delta between the original record and the updated one.
That would make sense. But it doesn't make sense for it to take that long.
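If it really does collapse the two deltas, the merge itself should be cheap.
Under the same toy model as above (a delta maps each changed field to the value
it had before that update), the merge would look something like this; it is
only my guess at the semantics, not what the engine actually does:

```python
def merge_deltas(older_delta, newer_delta):
    # For a field touched by both updates, the older delta already records
    # the original value, so it wins; the newer delta only contributes fields
    # that the second update changed for the first time.
    merged = dict(newer_delta)
    merged.update(older_delta)
    return merged
```

That is a dictionary merge per row, nothing that should cost minutes of CPU time.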
Adrian Roman