Subject [firebird-support] Re: Table with more than 100 index.
Author Svein Erling Tysvær
The answer to this is actually written in the link I provided one hour ago as a reply to 'count records of all tables' (although not the part I was referring to). Here's a copy of the most relevant part for you:

"Solution is to have a transaction table which only stores values +1 and -1, and to get the record count you just sum them up. Periodically, you would sum those up and insert as a single record, to keep the table small and fast."

Trying to use this to add another layer that, theoretically speaking, 'optimizes the optimizer' or 'overrides the optimizer' is a heavy task, though you might be able to improve your results in many cases. I'm afraid Ivan is talking about the SELECT, since an INSERT never uses ordering.


-----Original Message-----
From: [] On Behalf Of svanderclock
Sent: 8. desember 2009 10:32
Subject: [firebird-support] Re: Table with more than 100 index.

I was already thinking about this, but how do you handle the lock conflict?
If user A updates row 1234, he also updates the count in table AUX, and if user B updates row 493845, he also updates the count in table AUX => lock problem :(
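One possible answer, following the +1/-1 scheme quoted above rather than anything the original posters stated: the lock conflict only arises when both transactions UPDATE the same counter row; if each transaction instead INSERTs its own delta row, the writes never touch a common record. A hedged illustration (table and column names are made up):

```python
# Conflicting design: both users UPDATE the same row in AUX.
#   UPDATE aux SET counter = counter + 1 WHERE field = 'x'  -- user A
#   UPDATE aux SET counter = counter + 1 WHERE field = 'x'  -- user B  => lock conflict
#
# Conflict-free design: each transaction INSERTs its own delta row,
# so there is no shared row for two writers to collide on.
aux_deltas = []  # rows of (field, delta); hypothetical table name

def log_change(field, delta):
    aux_deltas.append((field, delta))  # always INSERT, never UPDATE

def count_for(field):
    # SELECT SUM(delta) FROM aux_deltas WHERE field = ?
    return sum(d for f, d in aux_deltas if f == field)

log_change('x', +1)   # user A inserts a row matching 'x'
log_change('x', +1)   # user B inserts another row for 'x'
log_change('x', -1)   # someone deletes a row matching 'x'
print(count_for('x'))  # 1
```

The trade-off is that reads become a SUM over many rows, which is why the quoted solution also compacts the delta table periodically.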

--- In, "Mon Mariola" <label@...> wrote:
> The only solution to this problem is to maintain an auxiliary table for each
> field in the main table with the fields "value" and "counter". Use triggers
> on the main table to maintain these tables. Then you can quickly get the
> number of records that meet each criterion, without having to go through all
> the records in the main table to derive that value.
> Ruben Marti.
> ----- Original Message -----
> From: svanderclock
> To:
> Sent: Monday, December 07, 2009 7:58 PM
> Subject: [firebird-support] Re: Table with more than 100 index.
> > The performance problem of the queries where total result set is
> > 2 million rows is not sorting, but simply retrieving all those
> > rows. I wish that the "first"/"limit" type queries had never
> > been invented and designers were forced to think about the selectivity
> > of queries rather than just "what the heck, select two million and
> > throw out one million nine hundred ninety-nine thousand seven hundred
> > and fifty rows."
> I agree, but if I could know that a query will return 2,000,000 rows
> before launching it, that would be great! Unfortunately there is no way ...
> without the ORDER BY everything is perfect, but because of the #{^[\| ORDER BY
> it's hell :(


