Subject Re: [firebird-support] The age old 'count' issue...
Author unordained
---------- Original Message -----------
From: Myles Wakeham <myles@...>
> With that said, I ran into performance problems with data sets that
> were 100,000 rows or larger in storage. I access all data through
> stored procedures, and in some cases my queries have 2-5 inner joins
> to other tables. When I attempted to get all data, and then split it
> into pages in PHP (my web app dev environment), it's just too slow. I
> found out quite a while back that the reason is that I was forcing the
> database to return the entire result set to the PHP server app, so I
> changed my design to allow the passing into the requesting stored
> procedure the SKIP and no of rows returned values, and then
> dynamically build a query to return that data set using FOR EXECUTE
> STATEMENT construct. This works much better and I've seen performance
> increases of 50x as a result.
------- End of Original Message -------
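
For reference, the pattern Myles describes might look roughly like this in PSQL
(a sketch only -- the ORDERS/CUSTOMERS tables, column names, and procedure name
are all made up for illustration):

```sql
SET TERM ^ ;

-- Hypothetical paged-query procedure: the caller passes how many rows to
-- return and how many to skip, and the SQL is built dynamically so FIRST/SKIP
-- limit the work done server-side.
CREATE PROCEDURE GET_ORDERS_PAGE (
  A_FIRST INTEGER,   -- number of rows to return
  A_SKIP  INTEGER)   -- number of rows to skip
RETURNS (
  ORDER_ID      INTEGER,
  CUSTOMER_NAME VARCHAR(60))
AS
DECLARE VARIABLE SQL_TEXT VARCHAR(1024);
BEGIN
  SQL_TEXT =
    'SELECT FIRST ' || CAST(A_FIRST AS VARCHAR(10)) ||
    ' SKIP '        || CAST(A_SKIP  AS VARCHAR(10)) ||
    ' o.ORDER_ID, c.NAME' ||
    ' FROM ORDERS o JOIN CUSTOMERS c ON c.ID = o.CUSTOMER_ID' ||
    ' ORDER BY o.ORDER_ID';

  FOR EXECUTE STATEMENT SQL_TEXT
      INTO :ORDER_ID, :CUSTOMER_NAME
  DO SUSPEND;
END^

SET TERM ; ^
```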

You could try caching the resultset database-side: keep a table of recent
searches and a detail table of the rows that went with them (delete the header
and detail rows after some expiration period). On the first call, the stored
procedure runs the query, dumps all the results into the detail table, and then
returns from there only the rows requested (FIRST N SKIP M), as you're doing
now; it can also return the total row count relatively cheaply.

On subsequent calls, it matches the incoming parameters against entries in the
recent-searches table. That will match the same user's previous search, and
possibly searches from other users (if desired -- depends on your security
model), saving time. Pushing the query results directly into a local table may
well be cheaper than downloading the whole resultset to PHP for parsing. You
may also be able to store less than the full resultset, depending on the cost
of re-running the joins at page-fetch time versus the disk cost of caching
everything up front.
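
A sketch of what the cache tables could look like -- all names, the hash-based
cache key, and the payload columns are assumptions, not an existing schema:

```sql
-- Header: one row per distinct search (parameter combination).
CREATE TABLE SEARCH_CACHE (
  CACHE_ID   INTEGER NOT NULL PRIMARY KEY,
  PARAM_HASH VARCHAR(64) NOT NULL,   -- identifies the parameter combination
  ROW_COUNT  INTEGER,                -- total rows, so COUNT needn't be re-run
  CREATED_AT TIMESTAMP DEFAULT CURRENT_TIMESTAMP  -- for expiry sweeps
);

-- Detail: the cached resultset, one row per result row, in order.
CREATE TABLE SEARCH_CACHE_ROWS (
  CACHE_ID      INTEGER NOT NULL,    -- FK to SEARCH_CACHE
  ROW_NO        INTEGER NOT NULL,    -- position in the ordered resultset
  ORDER_ID      INTEGER,             -- the cached payload columns
  CUSTOMER_NAME VARCHAR(60),
  PRIMARY KEY (CACHE_ID, ROW_NO)
);
```

On a cache hit, a page fetch is then just an indexed read:

```sql
SELECT FIRST :n SKIP :m ORDER_ID, CUSTOMER_NAME
FROM SEARCH_CACHE_ROWS
WHERE CACHE_ID = :cache_id
ORDER BY ROW_NO;
```

On a miss, the procedure would run the real query once with an
INSERT INTO SEARCH_CACHE_ROWS ... SELECT, record ROW_COUNT in the header, and
serve the page from the detail table as above.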

-Philip