| Subject | Re: Exponential loss of performance |
|---|---|
| Author | Aage Johansen |
| Post date | 2003-05-09T20:47Z |
On Fri, 9 May 2003 13:09:50 +0000 (UTC), fabiano_bonin
<fabiano@...> wrote:
"garbage collection" taking place?
Any difference if you commit and do a "select count(*) from ... where ..."
over the updated records between the updates?
--
Aage J.
<fabiano@...> wrote:
> FB 1.5 RC1, Red Hat 8.0
>
> Hi, I need some help. I have a table (table1) that has an update
> trigger that can insert into or update another table (table2).
> This trigger executes 3 queries on table1 itself and 1 query on
> table2. All queries inside the trigger are based on indexed fields.
> table1 has approx. 70,000 records.
> If I update 10,000 records of table1, it takes approx. 10 sec.
> If I update 20,000 records of table1, it takes approx. 30 sec.
> If I update 30,000 records of table1, it takes approx. 60 sec.
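Since the actual trigger wasn't posted, here is a rough sketch of the
kind of setup being described, valid PSQL for FB 1.5 (trg_table1_au,
group_id and total are invented names; the real trigger may differ):

  SET TERM ^ ;
  CREATE TRIGGER trg_table1_au FOR table1
  ACTIVE AFTER UPDATE POSITION 0 AS
  DECLARE VARIABLE total INTEGER;
  BEGIN
    /* a lookup on table1 itself via an indexed column */
    SELECT COUNT(*) FROM table1
      WHERE group_id = new.group_id INTO :total;
    /* one statement against table2 */
    IF (EXISTS(SELECT 1 FROM table2 WHERE group_id = new.group_id)) THEN
      UPDATE table2 SET total = :total WHERE group_id = new.group_id;
    ELSE
      INSERT INTO table2 (group_id, total) VALUES (new.group_id, :total);
  END^
  SET TERM ; ^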
"garbage collection" taking place?
Any difference if you commit and do a "select count(*) from ... where ..."
over the updated records between the updates?
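In an isql script that could look something like the following (f1 and
f2 are stand-ins for whatever indexed column the updates filter on and
whatever column they modify):

  UPDATE table1 SET f2 = f2 WHERE f1 <= 10000;
  COMMIT;
  /* this scan visits every row version the update created; with
     Firebird's cooperative garbage collection the reader removes
     the obsolete versions as it goes, so the next update starts
     from a clean table */
  SELECT COUNT(*) FROM table1 WHERE f1 <= 10000;
  COMMIT;
  UPDATE table1 SET f2 = f2 WHERE f1 <= 20000;
  COMMIT;
  SELECT COUNT(*) FROM table1 WHERE f1 <= 20000;
  COMMIT;

If the times stop growing once the count (or a sweep) runs between the
updates, accumulated back versions are the likely culprit.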
--
Aage J.