Subject Re: Exponential loss of performance
Author fabiano_bonin
Hi, Aage.

--- In ib-support@yahoogroups.com, Aage Johansen <aagjohan@o...>
wrote:
> On Fri, 9 May 2003 13:09:50 +0000 (UTC), fabiano_bonin
> <fabiano@l...> wrote:
>
> > FB 1.5 RC1, Red Hat 8.0.
> > Hi, I need some help. I have a table (table1) with an update
> > trigger that can insert into or update another table (table2).
> > This trigger executes 3 queries on table1 itself and 1 query on
> > table2. All the queries inside the trigger are based on indexed
> > fields. table1 has approx. 70,000 records.
> > If I update 10,000 records of table1, it takes approx. 10 sec.
> > If I update 20,000 records of table1, it takes approx. 30 sec.
> > If I update 30,000 records of table1, it takes approx. 60 sec.
>
>
> Are you updating the same rows every time? Are you sure there is
> no "garbage collection" taking place?

Yes. The rows of table2 can be updated many times in the same
operation, but the rows in table1 are updated just once. The
triggers on table1 don't update table1 itself; they just run
queries on it and update table2.
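
For reference, the trigger has roughly this shape. The key and
column names below (group_id, total) are simplified placeholders,
not my real schema:

set term ^ ;

create trigger table1_au for table1
after update as
  declare variable v_total integer;
begin
  /* three lookups on table1 itself, all through indexed columns */
  select count(*)
    from table1
    where group_id = new.group_id
    into :v_total;
  /* ...two more similar indexed selects on table1... */

  /* one statement against table2: update the matching row if it
     exists, otherwise insert it (FB 1.5 has no UPDATE OR INSERT) */
  update table2
    set total = :v_total
    where group_id = new.group_id;
  if (row_count = 0) then
    insert into table2 (group_id, total)
      values (new.group_id, :v_total);
end ^

set term ; ^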

I did this operation on a fresh database (after a backup and
restore, so I can say its structure was perfect), and I was the
only user connected to it, although I have other users connected
to other databases on the same machine.

> Any difference if you commit and do a "select count(*) from ...
> where ..." over the updated records between the updates?

No difference for table1.
Yes, there is a difference for table2.
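
Between the batches I committed and ran something along these
lines (the condition here is just a placeholder):

commit;
select count(*) from table2 where group_id > 0;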

I will try to make a reproducible test.
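
It will probably be a plain isql script against a freshly restored
database, something like this (SET STATS ON makes isql print the
elapsed time after each statement; the column names are again
placeholders, and the dummy assignment is only there to fire the
update trigger):

set stats on;
update table1 set some_field = some_field where id <= 10000;
commit;
/* restore the database again, then repeat with 20000 and
   30000 rows and compare the elapsed times */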

>
> --
> Aage J.