Subject: Re: Best way to delete millions of rows
Hey
I have finally had some more time to look into the problem I have been having, and I have isolated and reproduced it. The cause seems to be a VARCHAR(36) ID column containing a GUID/UUID.

Steps to reproduce:
1) Create a table (TEST) with a single VARCHAR(36) NOT NULL column named ID.
2) Load the table with 4,000,000 rows, generating each value with (select lower(uuid_to_char(gen_uuid())) from rdb$database); a consolidated script sketch follows these steps.

3) Create a primary key on the ID column.
4) Delete all rows in TEST ("delete from test") and commit.
5) From another connection, issue "select count(*) from test".
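
For reference, something along these lines in isql should reproduce it. The execute block loop is just one way to generate the 4,000,000 rows, and the constraint name PK_TEST is arbitrary:

  create table test (id varchar(36) not null);
  commit;

  set term ^ ;
  -- fill the table with 4,000,000 random lowercase UUID strings
  execute block as
    declare variable i integer = 0;
  begin
    while (i < 4000000) do
    begin
      insert into test (id) values (lower(uuid_to_char(gen_uuid())));
      i = i + 1;
    end
  end^
  set term ; ^
  commit;

  alter table test add constraint pk_test primary key (id);
  commit;

  delete from test;
  commit;

  -- from a second connection, after the commit above:
  select count(*) from test;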

This causes the database server to become unresponsive and hang for minutes at a time, during which no new connections are accepted and no other queries get processed.
To observe this behavior, I wrote a small program that selects the current timestamp from the server (select current_timestamp from mon$database), prints to the console how long the query took, sleeps for 500 ms, and then queries the time again. This loop ran during steps 4 and 5.

Is there a solution to this problem, or should I just file it as a bug in the tracker?

//Thomas