Subject: Re: Performance of Firebird vs. other DBMS
Author: laurenz_brein
--- In "Ivan Prenosil" wrote:
>> Sorry to disagree, and I guess it is also a matter of taste,
>> but I am very opposed to introducing redundancy into a database.
>> It can lead to inconsistencies. This 'good technique' is what I
>> would prefer to call a kludge.
> This technique is not built into the db server; you have to
> implement it yourself in your database. Other than that, what
> problems do you have with it?

I seem to have overreacted. I guess it just does not look pretty to me.
But you take what you get.

> Also, what inconsistencies do you have in mind ?

Every now and then you will have to "reorganize" the counter table,
summing up all the +1 and -1 delta rows, or else the performance will
degrade until it is as bad as that of the 'select count(*)'.
If you do not lock out all other transactions while you do that,
you might end up with wrong counts.
It is an additional maintenance task.
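To make the technique concrete, here is a minimal sketch of the delta-row counter table and its periodic consolidation. It uses Python with SQLite purely for illustration (Firebird would use triggers on the counted table to append the +1/-1 rows); the table and column names are my own invention.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical counter table: triggers on the counted table would append
# a +1 row per insert and a -1 row per delete.
cur.execute("CREATE TABLE row_count (delta INTEGER NOT NULL)")

# Simulate 5 inserts and 2 deletes on the counted table.
cur.executemany("INSERT INTO row_count (delta) VALUES (?)",
                [(1,)] * 5 + [(-1,)] * 2)
conn.commit()

# Cheap count: sum the small delta table instead of scanning the base table.
count = cur.execute("SELECT SUM(delta) FROM row_count").fetchone()[0]

# Periodic "reorganization": collapse all delta rows into a single row.
# In a live system this must not interleave with concurrent writers,
# or the count may come out wrong -- hence the maintenance burden.
total = cur.execute("SELECT SUM(delta) FROM row_count").fetchone()[0]
cur.execute("DELETE FROM row_count")
cur.execute("INSERT INTO row_count (delta) VALUES (?)", (total,))
conn.commit()

rows_left = cur.execute("SELECT COUNT(*) FROM row_count").fetchone()[0]
print(count)      # 3
print(rows_left)  # 1
```

The point of the consolidation step is exactly the one debated above: without it the delta table grows without bound, and summing it eventually costs as much as the count it was meant to avoid.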

>>> That means
>>> that a normal index on a field qty could not help you run the
>>> following
>>> select max(qty) from tablea
>> Isn't an index a B*-Tree, so that all you have to do when finding
>> the maximum is to descend from each parent node to the leaf node
>> that contains the largest values?
>> I have always thought that ascending and descending indexes only
>> behave differently (that is, efficiently or inefficiently) when it
>> comes to sorting the result.
> The reason is exactly the same as why Count(*) is expensive in MGA,
> i.e. the index can contain many more values than your transaction can
> see.
> If you find the maximum in the index, the server has to verify
> whether your transaction can actually see the record (because the
> value can pertain to a row that was inserted but not yet committed,
> or a row that was deleted but not yet garbage collected, or a row
> whose value was updated).
> If the found maximum is not visible to your transaction, the server
> has to find and try another candidate (i.e. it has to navigate
> through the index as it would for sorting).

I don't buy that.
The server doesn't have to look at ALL the rows of the table.
It finds the maximum in the index (at logarithmic cost),
then checks the row to see whether that record is visible; if not,
it takes the next entry down in the index, checks again, and so on.

You can't tell me that this is as expensive as a table scan.

If the server is smart, it might even fetch the last couple of
index entries the first time round.
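That walk-back strategy can be sketched in a few lines. This is a toy model of my own, not Firebird's actual code: the index is a sorted list of (value, row_id) entries, and a visibility predicate stands in for the MGA version check.

```python
# Toy model of finding MAX via an index under MVCC-style visibility.
# 'index' is the sorted list of (value, row_id) entries the index holds;
# 'visible' says whether our transaction may see the row's version.
# All names here are illustrative, not Firebird internals.

def indexed_max(index, visible):
    # Walk back from the largest entry; stop at the first visible one.
    checks = 0
    for value, row_id in reversed(index):
        checks += 1
        if visible(row_id):
            return value, checks
    return None, checks

# Index holds 1000 entries; the top two belong to rows inserted by a
# concurrent, uncommitted transaction, so they are invisible to us.
index = sorted((v, v) for v in range(1000))
invisible = {998, 999}

value, checks = indexed_max(index, lambda rid: rid not in invisible)
print(value, checks)  # 997 3 -- three probes, not a 1000-entry scan
```

Of course, if a long run of the largest entries happens to be invisible (say, a big uncommitted delete of the top values), the walk degenerates toward the full navigation Ivan describes; the disagreement is really about how common that worst case is.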

Laurenz Albe