Subject | Re: One table become very slow
---|---
Author | marrtins_dqdp |
Post date | 2008-09-13T07:45:49Z |
Thanks for your post.
--- In firebird-support@yahoogroups.com, Helen Borrie <helebor@...> wrote:
> >total versions = the total combined number of current and garbage
> >versions of records in the table
> >max versions = the number of record versions on the "worst" record.
>
> But they are all pretty awful - averaging around 3000 versions per
> record
I don't quite understand what the "worst" record is, or what a
typical scenario would produce those 3000 versions per record.
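(Trying to answer my own question after some reading: if I understand
the multi-versioning model right, every UPDATE or DELETE leaves the
previous record version behind until no other transaction could still
need it. So a pattern like the following - just a made-up sketch, the
LAST_SEEN / CLIENT_ID names are only for illustration - would pile
versions onto a single row:

    -- session 1: some connection starts a read/write snapshot
    -- transaction and never commits (e.g. a "monitoring" utility)
    SET TRANSACTION;
    SELECT COUNT(*) FROM CLIENTS;
    -- ...left sitting open for hours...

    -- session 2: the application keeps updating the same row
    UPDATE CLIENTS SET LAST_SEEN = CURRENT_TIMESTAMP WHERE CLIENT_ID = 1;
    COMMIT;
    -- ...repeated a few thousand times...

Each committed UPDATE in session 2 would add one more back version,
and none of them can be garbage-collected while session 1's old
transaction stays open, so the "worst" record would simply be the row
that collects the most back versions. Is that roughly the right
picture?)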
>
> >What's wrong about that?
>
> Your table is choked with garbage. Every access to the table,
> especially the "worst" records, is ploughing through pages and pages
> of garbage to find the current records.
>
> >How could it be fixed/avoided?
>
> Don't keep stopping gbak when it's in the process of
> garbage-collecting your problem table!!! It hasn't "hung", it is just
> very, very busy. If you terminate gbak, all that GC work will be
> rolled back.
OK, it filled about 2 MB in 24 hours :) I'll see how far this goes.
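(Side note to myself: next time I will run it with the verbose switch
so I can see it is actually doing something rather than hanging,
roughly like this - path and credentials are just placeholders:

    gbak -b -v -user SYSDBA -password masterkey /data/mydb.fdb /backups/mydb.fbk

If I read the docs right, -v prints a progress line for each step.)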
>
> Find and fix the application code that is allowing this garbage to
> build up. If, as you said, this only began recently, then you can
> probably find the source of the problem fairly easily....or maybe you
> have a user that is constantly "monitoring" this table by watching it
> via a utility program in a read/write transaction...or maybe the
> problem goes back the whole 7 years and has only begun to show itself
> under the 'combined' gc policy.
It will be difficult to catch the problem because I don't fully
understand how garbage is built up and cleaned up. Recently I added a
field that is constantly updated with a timestamp. Could that be the
problem? It's kind of silly... it is a fairly basic Apache/PHP
application. Should I maybe change something with the page size or
buffers? Recently I also did a full metadata dump, recreated the
database "from scratch" on FB version 2.1.1, and pumped the data back.
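(One thing I want to try, since the database is on 2.1.1 now: as far
as I know 2.1 has monitoring tables, so something like this - only a
sketch, run as SYSDBA or the owner - should show whether some
attachment is sitting in an old open transaction while the timestamp
updates keep creating new versions behind it:

    SELECT t.MON$TRANSACTION_ID, t.MON$TIMESTAMP,
           a.MON$USER, a.MON$REMOTE_ADDRESS, a.MON$REMOTE_PROCESS
      FROM MON$TRANSACTIONS t
      JOIN MON$ATTACHMENTS a
        ON a.MON$ATTACHMENT_ID = t.MON$ATTACHMENT_ID
     ORDER BY t.MON$TRANSACTION_ID;

A transaction that stays near the top of that list for hours would be
a good suspect, because the timestamp column by itself should only
mean one new version per update - the versions only pile up if
something stops them from being collected.)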
>
> Don't use file copy to back up the database and don't let any
> OS-based file-copy utilities include it in their purview. Same goes
> for any AV program - don't let them touch your database file.
I copied the db file *after* stopping all accessing services and then
Firebird itself - just for experiments.
>
> For now, you can have a short-term solution by using Firebird's own
> tools to try to reduce the garbage buildup. Either get everyone off
> once a week and run gbak or a sweep; or run a weekly
> backup-and-restore cycle with the -g flag to suppress garbage
> collection. This addresses symptoms, not cause, so you've got to
> find the rogue application code that is causing this.
Yes, but I'm afraid that's not really an option, because, as you
mentioned, it only addresses the symptoms.
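For reference, if I do end up scripting the weekly housekeeping
anyway, I understand it would be roughly this (paths and credentials
are just placeholders - please correct me if I have the switches
wrong):

    # manual sweep while everyone is off the database
    gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb

    # or: backup with garbage collection suppressed, then restore to a fresh file
    gbak -b -g -user SYSDBA -password masterkey /data/mydb.fdb /backups/mydb.fbk
    gbak -c -user SYSDBA -password masterkey /backups/mydb.fbk /data/mydb_new.fdb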
>
> >Header:
<SKIP>
> >
> This is the same header output you showed us two days ago. It looks
> like one that was done immediately after a gbak or sweep or, indeed,
> just after some user had just copped a successful cooperative GC and
> it is not consistent with your detail stats for that CLIENTS table.
> Run gstat -h when the problem is showing itself - that will tell you
> what you need to know.
Yes, it's the same, but I grabbed it after all the clients were shut
down, so I suppose nothing big had changed. It was taken two days ago,
not right after a gbak or sweep. However, I had run gbak and then
stopped it because I couldn't wait for it to finish. Should I run
gstat again?
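(In any case I will capture the header again the next time the
slowdown is actually happening; as far as I understand that is just

    gstat -h -user SYSDBA -password masterkey /data/mydb.fdb

with the path being a placeholder for the real database file.)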
>
> And make a point of running the detailed stats on this table over
> days, between the housekeepings, to figure out how fast this garbage
> is accumulating.
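Will do. If I understand gstat correctly, the per-table record/version
figures for just that table should come from something like this (the
database path is again a placeholder), so I can run it once a day and
compare the numbers:

    gstat -r -t CLIENTS -user SYSDBA -password masterkey /data/mydb.fdb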
Thank you very much for the hints.