Subject | Re: [firebird-support] Slow SELECT/DELETE
---|---
Author | Svein Erling Tysvaer |
Post date | 2007-06-04T21:48:32Z |
pdhpoolplayer1957 wrote:
> I have a Firebird database (1.5) that serves as the data store for the
> IPCheck server monitoring application. We upload the log table
> (IPCLOG) from Firebird to MSSQL so we can perform data analysis and
> reporting. The log table typically contains ~30 million records.
> We've been experiencing extremely slow performance on the upload, so we
> decided to attempt to delete older records prior to the upload.
>
> Using the FlameRobin interface to our Firebird instance, I entered a
> simple query:
>
> SELECT datetime FROM IPCLOG WHERE datetime < '05/01/2007'
>
> This query takes over 1.5 hours to run. There is an index on the
> datetime field, and the statistics have been updated. Is there
> anything else I can do to improve this abysmal performance?
>
> TIA for your time and expertise.

What's the PLAN? I don't know FlameRobin, is 1.5 hours the time it takes
to return all records? If so, how many records are returned? What about
database statistics? A large gap between the OAT (Oldest Active
Transaction) and the next transaction
can ruin performance completely. Did you do a large delete just before
you did this select (a select after a delete may have to do the
housekeeping)? Are there many duplicates in your indexes (Firebird 1.5
doesn't handle indexes with lots of duplicates very well - at least not
when considering UPDATE/DELETE)?
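If you're unsure about any of these, you can check them directly; a rough
sketch (the database file name is just a placeholder for yours):

  /* in isql: show the plan and execution stats for the query */
  SET PLAN ON;
  SET STATS ON;
  SELECT datetime FROM IPCLOG WHERE datetime < '05/01/2007';

  /* from the command line: compare "Oldest active" with "Next transaction"
     in the header page, and look at "total dup"/"max dup" per index */
  gstat -h ipcheck.fdb
  gstat -i ipcheck.fdb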
One thing that I remember has been recommended (as a workaround)
before is to delete records in smaller batches more frequently. That
is, delete 1200 records every 5 minutes rather than 10 million records
once a month.
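As a rough sketch of what I mean (table and column names are from your
post, the dates are just examples), run something like this from a
scheduled job, committing between batches:

  DELETE FROM IPCLOG WHERE datetime < '01/02/2007';
  COMMIT;
  DELETE FROM IPCLOG WHERE datetime < '01/03/2007';
  COMMIT;
  /* ...and so on, moving the cutoff forward a day at a time */

Committing between batches keeps each transaction - and the garbage
collection that follows the deletes - small.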
Set