| Subject | Re: Hang on disconnect after a batch job |
|---|---|
| Author | Alexander V.Nevsky |
| Post date | 2003-11-03T15:46:09Z |
--- In firebird-support@yahoogroups.com, Alexander Tabakov <saho@e...>
wrote:
> Hello Helen,
>
> >I think possibly there's something you're not telling us: did you begin
> >the import operation by deleting a large number of old records? If so,
> >then the first transaction on that table *after* the one that did the
> >deleting will launch the garbage collector.
>
> No I do not delete any records before inserting. What I didn't tell
> you :)) was that there was another batch job updating some of the records.
> But it does not delete records either. Only updates.

Alexander, an update, the same way as a delete, creates record versions
which will be recognized as garbage when all transactions which should
be able to see them are closed.

> This is a routine
> monthly procedure and it was OK the last several months.

So check what was changed recently:
a) in the applications which work with this database (perhaps long
living transactions were introduced);
b) in the database - wasn't there a new index created on the updated table(s),
especially a low-selectivity one?
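For (a), one quick way to look for a long-living transaction is the database header page printed by gstat (a sketch only; the database path below is a placeholder, and gstat is assumed to be on your PATH):

```shell
# Print the database header page (placeholder path - substitute your own).
gstat -h /path/to/mydb.gdb
# In the output, compare the "Oldest transaction" counter with
# "Next transaction": a large gap that keeps growing usually means some
# transaction has stayed open a long time, so garbage behind it
# cannot be collected.
```
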
I struggled last week against a similar problem. One of my colleagues
created an index with selectivity 0.5 on a 4-million-row table :) and an
update or delete of 10% of the rows caused GC to run for many hours.
Unfortunately I searched for the problem on a copy of the database created
before his brilliant invention, being sure they were identical, and
lost very much time. A first look at the indices on the original table made
it all clear :)
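To spot such low-selectivity indices yourself, the selectivity recorded for each index can be read from the system tables (a minimal sketch in Firebird SQL; MYTABLE and MY_INDEX are placeholder names):

```sql
-- RDB$STATISTICS holds the selectivity recorded when the index was
-- created or last recomputed; a value near 0.5 (as in the story above)
-- means only about two distinct key values.
SELECT RDB$INDEX_NAME, RDB$STATISTICS
FROM RDB$INDICES
WHERE RDB$RELATION_NAME = 'MYTABLE';  -- placeholder table name

-- Recompute the stored selectivity after large data changes:
SET STATISTICS INDEX MY_INDEX;        -- placeholder index name
```

Note the stored value can be stale if the data has changed a lot since the index was created, hence the SET STATISTICS step.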
Best regards,
Alexander