| Subject | Re: [firebird-support] Removing 90% of data from 200 mln record table |
|---|---|
| Author | Aage Johansen |
| Post date | 2006-03-25T15:17:42Z |
k_smith_000 wrote:
> I've a table with a large number (200 000 000) of records. I'd like to
> remove ~90% of records from the table that fulfil given criteria.
> I know such an operation can take a long time, and I should back up
> and restore the database after the operation has finished.
> My question is: what is the most efficient way of performing such
> operations? Should I put one delete statement into one transaction, or
> divide it into a few statements with separate transactions?
>
One possibility is to create a new (empty) database with the same
metadata, and then copy records from the old database into the new one.
You could start with a metadata-only backup; restoring it gives
you the new (empty) database. Watch out for generator values, and
you might choose to deactivate triggers in the new database during the transfer.
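The metadata-only backup/restore step could look roughly like this with gbak (the database paths and credentials below are placeholders, and exact option spellings can vary between Firebird versions):

```shell
# Back up metadata only (-m): table definitions, triggers, generators,
# but no row data, from the source database.
gbak -b -m -user SYSDBA -password masterkey \
    /data/big.fdb /backups/big_meta.fbk

# Restore (-c) the metadata-only backup as a fresh, empty database.
gbak -c -user SYSDBA -password masterkey \
    /backups/big_meta.fbk /data/big_new.fdb
```

After the restore, set the generators in the new database to the current values from the old one before letting users in, as noted above.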
If you do one big delete (or several smaller ones), you should probably
make a backup (without garbage collection) and restore it, and
hope that no user issues a "select count(*) ..." before the new
database is ready.
If you make many small deletes, you might do a "select count(*) ..."
on the parts where records are deleted (delete + commit + select), but
this could mean that the database is inconsistent until you are finished.
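The delete-in-batches approach can be sketched from the shell with isql. The table name, criterion, and batch size are made up for illustration, and the "SELECT FIRST n" subquery is one way to limit a delete on older Firebird versions that lack a ROWS clause on DELETE:

```shell
# Delete one batch of matching rows and commit; re-run this until the
# delete reports zero rows affected. BIG_TABLE, OLD_FLAG and the batch
# size of 100000 are only examples.
isql -user SYSDBA -password masterkey /data/big.fdb <<'SQL'
DELETE FROM BIG_TABLE
  WHERE ID IN (SELECT FIRST 100000 ID
                 FROM BIG_TABLE
                WHERE OLD_FLAG = 1);
COMMIT;
SQL
```

Committing between batches keeps each transaction small, at the cost of the intermediate inconsistency mentioned above.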
--
Aage J.