Subject | Re: Problems after many records deleted
---|---
Author | Alexander V. Nevsky
Post date | 2004-05-13T19:11:11Z |
--- In firebird-support@yahoogroups.com, "fabiano_bonin"
<fabiano@p...> wrote:
> > Fabiano, 5 minutes is really "never"? :)
>
> I aborted after this time, but it would have taken much more. After
> that I did a 'select count(*)' on this table, and after 15 minutes I
> still didn't have the count (just 20,000 records).

Example from a rather old article (I can't recall exactly; about 3-4
years ago) - a report on an investigation of this subject (I don't give
a link because it is in Russian):

table - 9,000,000 records
deleting 4,000,000 records - several minutes
select count (garbage collection) with a non-unique index present - 20 HOURS
the same without the index - 12 minutes
backup/restore with garbage collection disabled - 15 minutes
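A minimal SQL sketch of the pattern those numbers describe (table and column names here are hypothetical, not from the article):

```sql
-- The mass delete itself finishes quickly: under Firebird's
-- multi-generational architecture, rows are only marked as deleted.
DELETE FROM big_table WHERE created_at < '2004-01-01';
COMMIT;

-- The first statement that later reads those rows pays the
-- garbage-collection cost. With a low-selectivity, non-unique index
-- present, cleaning the long duplicate chains can take hours:
SELECT COUNT(*) FROM big_table;
```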
An example from my personal practice: in my database there are several
tables with 5-10 million records. One of my developers once had a
brilliant idea - he created an index on one of these tables on a column
which can have only 2 values, with both values occurring in
approximately equal numbers. At the end of the month ~250,000 records
in this table are deleted and inserted. Thank goodness it happened on a
Friday that time - the nightly sweep (usually 10-15 minutes) finished
on Sunday evening...
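A sketch of the kind of index meant above, and one way around the problem (identifiers are hypothetical):

```sql
-- An index on a two-valued column with a roughly 50/50 distribution
-- produces very long duplicate chains, which makes garbage collection
-- after a mass delete extremely slow:
CREATE INDEX idx_flag ON big_table (flag);

-- One workaround: drop the index before the monthly mass
-- delete/insert cycle and recreate it afterwards.
DROP INDEX idx_flag;
-- ... mass delete and insert here ...
CREATE INDEX idx_flag ON big_table (flag);
```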
> > Interesting virtue of human psychology - the only real problem in
> > MGA is garbage, and everybody recalls this only after stepping on
> > this rake themselves, not when planning data structures and access
> > modes :)
>
> Calm down, I'm not blaming anything. It's just that this is the first
> time it has happened to me, after years of working with IB/FB, and
> I'd like to know what is happening so I can avoid it.

Didn't you notice the ":)"? It is a really typical question - the
majority of us, perhaps instinctively, perhaps because of the nature of
our applications, seldom do mass deletes. And when we occasionally
do... :)
> If I used -g when doing that backup, would I not have had that delay?
> Do you mean gbak makes a backup of the garbage, too?

Yes. Any access to data, unless it is disabled in the transaction
parameters, initiates garbage collection if garbage is detected. Gbak,
like any application, performs selects. The mentioned option forces the
use of the appropriate parameter for its transaction so that it does
not perform garbage collection.
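For illustration, a backup/restore cycle with garbage collection disabled might look like this (paths and credentials are placeholders):

```sh
# -b backup, -g do not perform garbage collection during the backup
gbak -b -g -user SYSDBA -password masterkey mydb.fdb mydb.fbk

# -c create a fresh database from the backup; the restored file is
# compact and contains no garbage to begin with
gbak -c -user SYSDBA -password masterkey mydb.fbk mydb_restored.fdb
```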
Best regards,
Alexander.