| Subject | Re: [firebird-support] Select count(*) using pk |
|---|---|
| Author | Ann W. Harrison |
| Post date | 2005-02-09T21:25:05Z |
dirknaudts wrote:

> It seems that doing a select count(*) from mytable, where mytable has
> a pk, FB 1.5.1 doesn't use that PK to speed up the count.

That's right. Using the primary key would in fact be slower than
reading the table in many cases. Firebird tables are multi-generational
- a single record can exist in several versions and records that have
been deleted may not have been expunged from the table if they're still
valid for some active transaction.
From the point of view of any single transaction a record may be too
new to be valid, valid, valid but the wrong version, or no longer valid.
The only way to screen them is to look at the actual records. If you
have to look at each record, the fastest method is to read the data
pages in order.
And no, it wouldn't be a great idea to put the transaction information
in the index to make select count(*) faster, because it would double or
triple the size of indexes - increasing the size of each entry and
increasing the number of entries.
Some applications really need the number of records in large volatile
tables. One way to get that is with an external counter table. The
counter table has two fields, table_name and record_count. Put
before-insert and before-delete triggers on the tables to be counted
that insert a row with the table name and 1 for an insert or -1 for a
delete. From time to time, run a procedure that adds up all the counter
records for a table, deletes them, and stores the new total back in the
counter table.
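A minimal sketch of that scheme in Firebird SQL might look like the
following. The names MYTABLE, ROW_COUNTS, and CONSOLIDATE_COUNT are
illustrative, not from the original post:

```sql
/* Counter table: one row per insert/delete event, consolidated later. */
CREATE TABLE row_counts (
    table_name   VARCHAR(31) NOT NULL,
    record_count INTEGER NOT NULL
);

SET TERM ^ ;

/* Each insert on MYTABLE records a +1 event. */
CREATE TRIGGER mytable_count_ins FOR mytable
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  INSERT INTO row_counts (table_name, record_count)
    VALUES ('MYTABLE', 1);
END^

/* Each delete on MYTABLE records a -1 event. */
CREATE TRIGGER mytable_count_del FOR mytable
ACTIVE BEFORE DELETE POSITION 0
AS
BEGIN
  INSERT INTO row_counts (table_name, record_count)
    VALUES ('MYTABLE', -1);
END^

/* Periodic consolidation: sum the events for one table,
   delete them, and store the single running total back. */
CREATE PROCEDURE consolidate_count (tname VARCHAR(31))
AS
  DECLARE VARIABLE total INTEGER;
BEGIN
  SELECT COALESCE(SUM(record_count), 0) FROM row_counts
    WHERE table_name = :tname INTO :total;
  DELETE FROM row_counts WHERE table_name = :tname;
  INSERT INTO row_counts (table_name, record_count)
    VALUES (:tname, :total);
END^

SET TERM ; ^
```

With this in place, `SELECT SUM(record_count) FROM row_counts WHERE
table_name = 'MYTABLE'` gives the current count by summing a handful of
rows instead of scanning the whole table.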
Regards,
Ann