Subject Re: [firebird-support] select on deleted data - bad performance
Author Helen Borrie
At 19:38 10/09/2008, you wrote:
>Hello again,
>
>I'm having some trouble here. I've got a few tables being filled up with data periodically. Some procedures then process the data and another application exports it to a disk file - my application then empties the table (delete from tablename).
>Meanwhile some other procedures and front-end applications are checking the table and its record count (select * / count(*) from tablename) and, to my surprise, this is incredibly slow.
>The record count of the table is ~60,000 records, each containing 10 numerics - usually not enough data to cause trouble. I played around a bit and tested some statements:
>
>deletion of all ~60,000 records - ~500 msec

60,000 records are flagged as deleted.

>select count(*) / * from table (table is completely empty after deletion) ~ 5 minutes !!

A select count(*) or select * visits every row. The deleted record versions are flagged as ready for garbage collection, so the first statement to touch them does the clean-up. (Running such a statement straight after a bulk delete is a tactic sometimes used deliberately to speed up house-cleaning on Classic, which does only cooperative garbage collection and has no background GC process as Superserver does.)
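
To make the mechanics concrete, here is a minimal sketch of the sequence (assuming the statements run in separate, committed transactions against the same tablename):

  -- transaction 1: bulk delete, then commit
  DELETE FROM tablename;
  COMMIT;

  -- transaction 2: the first statement to visit the rows performs the
  -- cooperative garbage collection, so it is the one that is slow
  SELECT COUNT(*) FROM tablename;   -- the ~5 minutes reported above

  -- later scans find the pages already cleaned and return quickly
  SELECT COUNT(*) FROM tablename;   -- a few msec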


>Once a select * or count(*) has finished (after 5 minutes ^^), every following statement executes normally (a few msecs). The table is indexed (primary key on an integer column).

An index is irrelevant for select * or count(*). Under Firebird's multi-generational architecture, record visibility is decided per transaction, so the engine has to visit every row version; it cannot answer count(*) from an index.
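
You can see the full scan for yourself in isql, e.g. (a rough illustration; output trimmed):

  SQL> SET PLAN ON;
  SQL> SELECT COUNT(*) FROM tablename;

  PLAN (TABLENAME NATURAL)

NATURAL means a sequential (full) table scan - no index is used.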


>What's happening here? Looking at the task manager there's no significant CPU load. The fb client seems to do nothing during execution (load of client process ~5% / ~20% other / ~75% idle)


With cooperative GC, the next query on the table after a bulk delete or update has been committed gets the load of the garbage collection. So, insofar as the client is the requestor of that query, the client is "involved". But it doesn't make the client use more CPU time: that particular client is simply the one that has to wait or be slowed down, because the server is performing the GC within that client's transaction context.

The client is not going to do much while the server engine is busy cleaning up.

>Sys specs:
>Athlon64 3400+, 1g RAM, 2x7200rpm SATA Disks @Raid0, Win2000
>Running Firebird 2.1 classic server

>Any suggestions?

Well, if you *want* this cleanup, by all means perform this count(*) [or select * if that's what you want to do] after the bulk delete. Consider scheduling the bulk delete and the ensuing clean-up so that they are performed at a convenient time by some admin-only user on a connection that is not concerned about waiting.
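
As a sketch of how that might look, a small isql script run off-hours under an admin-only login (the user, password and database path here are just placeholders):

  isql -user SYSDBA -password masterkey mydatabase.fdb -i cleanup.sql

where cleanup.sql contains:

  DELETE FROM tablename;
  COMMIT;
  -- take the garbage-collection hit on this admin connection,
  -- so the ordinary clients don't pay for it later
  SELECT COUNT(*) FROM tablename;
  COMMIT;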

If the idea is to completely empty the table with exclusive access (no user can start inserting new rows until the old records have all been cleared out), then consider dropping and recreating the table instead of deleting the rows.
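
A rough sketch of that approach (the column list is invented for illustration - the real DDL must match your existing table, and any triggers, procedures or grants that depend on it have to be recreated as well):

  -- fails with "object in use" if any other transaction is using the table
  DROP TABLE tablename;
  COMMIT;

  CREATE TABLE tablename (
    id   INTEGER NOT NULL PRIMARY KEY,
    n1   NUMERIC(18,4),
    n2   NUMERIC(18,4)
    -- ...and the rest of the ten numeric columns
  );
  COMMIT;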

./heLen