Subject: Performance with heavily updated table
Author: Tim Ward
I'm trying to understand why performance degrades dramatically during
the lifetime of a heavily updated table.

The query we're using as an example is:

SELECT COUNT(*) FROM MYTABLE

After the database has been backed up and restored, this query runs as
expected: the stats from SET STATS ON show that the number of page
reads is about the same as the number of data pages in the table reported by

gstat <db> -t "MYTABLE" -r
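
(For completeness, the SET STATS ON figures come from an isql session
along these lines - the database path here is just a placeholder:

isql /path/to/mydb.fdb
SQL> SET STATS ON;
SQL> SELECT COUNT(*) FROM MYTABLE;

and we're comparing the "Reads" figure isql prints against the data page
count from gstat.)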

After the application has been running for a while, which involves lots
of updates to MYTABLE, the query gets dramatically slower, and the stats
from SET STATS ON show that the number of page reads has gone up by a
factor of 100 or so (the query time has gone from a fraction of a second
to many seconds; the table has around 100k records).

However, the number of data pages in the table as reported by gstat has
only gone up by a few, and gstat doesn't appear to be reporting a
non-trivial number of "versions".

So it looks to us as though the COUNT(*) table scan is reading about 100
times as many pages as it needs to - surely a table scan only needs to
read each page of the table once?

What are we misunderstanding here please?

(At the point where we're re-running and measuring the query, the
updating activity has been stopped for a while, so there's no issue with
concurrent access to the database.)

Firebird 2.1.5
Dialect 3
Classic
CentOS 5.9

Thanks

--
Tim Ward