Subject Re: Performance of Firebird vs. other DBMS
Author Adam
> Every now and then you will have to "reorganize" the counter table,
> summing up all the +1 and -1 values, else the performance will
> become as bad as that of the 'select count(*)'.

Absolutely. Part of the suggestion is that you create a stored
procedure that looks a bit like this:

select sum(cnt) from countertable into :cnt;
delete from countertable;
insert into countertable (cnt) values (:cnt);

The idea is that you put a script in a scheduler or cron. Depending on
the average usage of the table, you can set it to run every 1, 5, 15,
30, 60 minutes or whatever.
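The whole pattern can be sketched end to end. This is an illustration only, using Python's sqlite3 stdlib module rather than Firebird, and the trigger side is simulated by hand (in the real setup, insert/delete triggers on the big table would write the delta rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE countertable (cnt INTEGER NOT NULL)")

# Stand-in for the triggers: +1 for every insert on the big table,
# -1 for every delete. Here: 5 inserts, 2 deletes.
conn.executemany("INSERT INTO countertable (cnt) VALUES (?)",
                 [(1,)] * 5 + [(-1,)] * 2)

def consolidate(conn):
    """The scheduled 'reorganize' step: collapse all the +1/-1 rows
    into a single row holding the running total."""
    with conn:  # one transaction for the whole collapse
        (total,) = conn.execute("SELECT SUM(cnt) FROM countertable").fetchone()
        conn.execute("DELETE FROM countertable")
        conn.execute("INSERT INTO countertable (cnt) VALUES (?)", (total,))

def current_count(conn):
    # The cheap replacement for SELECT COUNT(*) on the big table.
    return conn.execute("SELECT SUM(cnt) FROM countertable").fetchone()[0]

consolidate(conn)
print(current_count(conn))                                              # 3
print(conn.execute("SELECT COUNT(*) FROM countertable").fetchone()[0])  # 1
```

After consolidation the counter table holds a single row, so the count query stays cheap no matter how large the real table grows.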

If you never run the stored procedure, this approach would probably be
slower than count(*), but it is a co-operative calculation: the more
often the procedure runs, the fewer delta rows there are to sum, so
the faster each count becomes.

> If you do not lock out all other transactions while you do that,
> you might end up with wrong counts.


That is where MGA (multi-generational architecture) comes into play.
You see, other transactions cannot see the changes that you are
making, and the above stored procedure cannot see the changes the
other transactions are making. Transactions already in flight still
reference the old rows of the table, while transactions started after
the consolidation commits reap the benefit of the work the stored
procedure has done.

It takes a while to get your head around, but the solution does not
block any inserts, updates or deletes, even in the middle of the sum.

All you need to know is that, to get the correct count, you run the
following query

select sum(cnt) from countertable into :cnt;

Even if a few items have been inserted and a few removed since the
last consolidation, only a handful of delta rows remain, so the sum
operation will be very quick.

> I don't buy that.
> The server doesn't have to look at ALL the rows of the table.
> It finds the maximum in the index (logarithmic expensiveness),
> then checks the row to see if that record is visible, if not, gets
> the last but one in the index, checks again, and so on.
> You can't tell me that this is as expensive as a table scan.

I know you have since satisfied yourself of this from other posts.

> If the server is smart, it might even fetch the last couple of
> index entries the first time round.

How many is "the last couple"? It would depend on the average inserts /
deletes per second and the age of your transaction. Imagine if 40 of
my sites started communicating with my database server at once (which
happens) while a longer transaction was running; the maximum value my
transaction can see may well be 100 records in from the end of the
index.
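To see why, here is a toy model of that index walk (pure Python; the tuple layout and the visibility rule are made-up stand-ins for real record versioning):

```python
def visible_max(index_desc, my_txn_start):
    """Walk the index from the top, skipping entries whose row versions
    were committed after our transaction began (so we cannot see them)."""
    for value, committed_at in index_desc:
        if committed_at < my_txn_start:
            return value
    return None  # nothing visible at all

# 900 old rows our transaction can see, plus 100 rows committed after
# it started (e.g. by those 40 busy sites all inserting at once).
entries = sorted(
    [(i, 50) for i in range(900)] +          # committed long ago
    [(900 + i, 200) for i in range(100)],    # committed after txn 100 began
    key=lambda e: e[0], reverse=True)

print(visible_max(entries, my_txn_start=100))  # 899
```

The walk has to step past all 100 invisible entries before it finds a visible maximum, which is why "fetch the last couple" cannot be a fixed number.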

Hope that helps