Subject | Re: Problems after many records deleted |
---|---|
Author | Alexander V. Nevsky |
Post date | 2004-05-13T17:54:37Z |
--- In firebird-support@yahoogroups.com, "fabiano_bonin"
<fabiano@p...> wrote:
> I had a table with 200,000 records.
>
> I deleted 180,000 records from this table.
>
> Now I'm trying to back up my database, and gbak stops at this table
> and never moves on.
>
> I didn't wait very long (about 5 minutes), but other tables go
> almost instantly.

Fabiano, is 5 minutes really "never"? :)

> I remember reading here that there is some problem with garbage
> collection when deleting many records, and that this behavior is
> expected.

An interesting quirk of human psychology - the only real problem in
MGA (Firebird's multi-generational architecture) is garbage, and
everybody recalls this only after stepping on the rake themselves,
not when planning data structures and access modes :)

> Am I right? If yes, is there some workaround to avoid this delay?

To prevent the problem, if you have further usage of this instance of
the database in mind:
Before performing a mass delete, or a mass update of indexed columns,
drop the indices and referential integrity constraints on this table,
especially indices with many duplicate values. You can make indices
inactive instead, but dropping them works much faster. Make such mass
changes with exclusive access. After commit, start another transaction
and run select count(*) on this table - the full scan will clean the
garbage from the data pages. Then recreate the indices and
constraints; see the sketch below.
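A minimal isql sketch of that sequence - all table, column, index and
constraint names here are hypothetical, substitute your own:

  /* drop the index and the foreign key before the mass change */
  DROP INDEX IDX_MYTABLE_STATUS;
  ALTER TABLE MYTABLE DROP CONSTRAINT FK_MYTABLE_CUST;
  COMMIT;

  /* the mass delete itself, done under exclusive access */
  DELETE FROM MYTABLE WHERE STATUS = 'CLOSED';
  COMMIT;

  /* a full scan in a fresh transaction cleans the data pages */
  SELECT COUNT(*) FROM MYTABLE;
  COMMIT;

  /* recreate the index and the constraint */
  CREATE INDEX IDX_MYTABLE_STATUS ON MYTABLE (STATUS);
  ALTER TABLE MYTABLE ADD CONSTRAINT FK_MYTABLE_CUST
    FOREIGN KEY (CUST_ID) REFERENCES CUSTOMER (ID);
  COMMIT;

(The inactive-index alternative is ALTER INDEX IDX_MYTABLE_STATUS
INACTIVE before the change and ALTER INDEX IDX_MYTABLE_STATUS ACTIVE
after it, but, as said above, drop and recreate is faster.)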
Depending on the selectivity of the indices and the number of records,
this can speed up the whole operation up to 1000 times and even more.
The hardest work for FB is garbage collection in non-unique indices;
recreating those same indices from scratch is much faster.
If you don't intend to keep using this instance of the database,
having a backup and restore in mind instead, you can skip all these
precautions and simply perform the backup right after committing the
changes, using the -g option (no garbage collection). As usual, a
sense of duty forces me to repeat: never restore over the source
database - always restore to a new file, and only after that replace
the source.
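For example, with hypothetical file names and the default SYSDBA
credentials:

  gbak -b -g -user SYSDBA -password masterkey mydb.gdb mydb.fbk
  gbak -c -user SYSDBA -password masterkey mydb.fbk mydb_new.gdb

The first command backs up without garbage collection (-g), so it will
not stall on the deleted records; the second restores into a new file,
which you can then swap in for the old database.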
Best regards,
Alexander.