Subject Re: [firebird-support] Getting garbage info from the database
Author Ann Harrison
Peter Ypenburg wrote:

>One of the things we have that is killing us at the moment (we suspect)
>is garbage.
>
Ah. I was afraid you were getting bad answers. However irritating it
is to get no answer, no answer is still better than a wrong answer.

>At night we have programs that run and import new data (from host systems like SAP etc.) and do many, many calculations and manipulations of the data so that when the
>user connects the next day he can see the results of last night's
>import and data calculations.
>
>These processes that run overnight (and create garbage) have a
>limited time window in which to finish. If an overnight batch
>blows out by 6 hours the client is stuffed and furious.
>
Clients really can be a nuisance, too bad they're so necessary.

Your description really doesn't give me a clear idea of what's going on
at the database level in terms of inserts, updates, and deletes. As
Helen said, inserts don't create garbage. Deletes don't create much
garbage. Updates, on the other hand, can create lots of garbage. A
record is deleted only once. It can be updated innumerable times, and
if the updates overlap with a long running transaction, innumerable back
versions will be kept in the database and must be garbage collected when
the old transaction finally finishes.
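
To see how that plays out, here is a sketch of the pattern that piles
up back versions (the table and column names are invented, not yours):

    /* Connection A: a long-running snapshot transaction, e.g. a report. */
    SET TRANSACTION SNAPSHOT;
    SELECT SUM(BALANCE) FROM ACCOUNT;
    /* ...A stays open for hours without committing... */

    /* Connection B: meanwhile the batch job updates the same rows over
       and over. Each committed update leaves a back version that cannot
       be garbage collected until A's transaction finally ends. */
    UPDATE ACCOUNT SET BALANCE = BALANCE + 1 WHERE ACCOUNT_ID = 42;
    COMMIT;
    /* ...repeated many times while A remains open... */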

>We need help so here are a couple of questions:
>
>1.) Can someone point me to an in-depth tutorial about garbage in the
>database?
>
Not really. We're just beginning to really understand some of the
implications of garbage collection. For example, we've recently
noticed that in SuperServer, under load, the garbage collect thread can
be starved out completely by active transaction threads, leading to a New
York City Sanitation Workers Strike situation. Firebird 1.5 reduces
that problem by running the garbage collect thread at the same priority
as transaction threads. A variant of Firebird 1.0.3 eliminates the
garbage collect thread completely, reverting to the original
co-operative garbage collection mechanism.

A less recent discovery was the effect of long duplicate chains on
garbage collection. In early versions of InterBase, few people put
indexes on fields like "gender". Foreign keys changed that, because they
automatically generate indexes, and it seems quite reasonable to restrict
"gender" with a foreign key tying it to legal values - "male", "female",
"both", "neither", "unknowable". The result is an index with very long
duplicate chains, and removing entries from such a chain is expensive
out of all proportion to the number of records involved.

I can suggest a couple of strategies. First, look for indexes with
low selectivity - lots of duplicates, and particularly long duplicate
chains (gstat -a will find them). Add a segment to those indexes to
make them unique, or more nearly unique. For example, if you've got an
index on the field "gender", make it a compound index on "gender" and
"birth date". If you've got a foreign key linking "gender" to a parent
sex table (well, how else did it get to be a parent?), replace it with a
trigger.
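
Concretely, the two fixes look something like this (the table, column,
and exception names are mine, purely for illustration):

    /* Widen the low-selectivity index so duplicate chains stay short. */
    CREATE INDEX IDX_PERSON_GENDER ON PERSON (GENDER, BIRTH_DATE);

    /* Replace the foreign key - and the duplicate-ridden index it
       creates behind your back - with a validating trigger. */
    CREATE EXCEPTION E_BAD_GENDER 'Not a legal gender value';
    SET TERM ^ ;
    CREATE TRIGGER PERSON_GENDER_CHK FOR PERSON
    ACTIVE BEFORE INSERT POSITION 0
    AS
    BEGIN
      IF (NOT EXISTS (SELECT 1 FROM GENDERS G
                      WHERE G.GENDER = NEW.GENDER)) THEN
        EXCEPTION E_BAD_GENDER;
    END ^
    SET TERM ; ^

You'd want a matching trigger for updates, and something on the parent
table if its rows can disappear, but that's the shape of it.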

If the contents of some tables are completely removed and replaced,
drop and recreate the tables instead of deleting the records
individually. If that's not possible because of dependencies, drop the
indexes on the table before emptying it and put them back after the
table has been reloaded.
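
For example (index and table names invented; note that you can only
deactivate indexes you created yourself - indexes backing primary,
unique, or foreign key constraints have to be dropped and recreated):

    ALTER INDEX IDX_IMPORT_KEY INACTIVE;
    DELETE FROM NIGHTLY_IMPORT;
    COMMIT;
    /* ...reload the table... */
    /* Rebuilding the index in one pass is far cheaper than
       maintaining it row by row during the delete and reload. */
    ALTER INDEX IDX_IMPORT_KEY ACTIVE;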

Are there long transactions? If so, can they be made read-committed
read-only so they don't inhibit garbage collection?
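
In isql that's just:

    /* Read committed + read only: the transaction is effectively
       pre-committed, so it never forces back versions to be kept. */
    SET TRANSACTION READ ONLY ISOLATION LEVEL READ COMMITTED;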

Try 1.5, or the version of 1.0 without a separate garbage collect
thread.

>
>2.) How do I find out, at any stage, how much garbage there is in the
>database?
>
gstat -a -r, looking for large numbers of back versions.

>
>3.) How do I find out where (what tables) the garbage is affecting the
>database?
>
As above, though you can also get an estimate by analyzing your
application.

>4.) What is the best way to get rid of the garbage when there is
>garbage? We currently fire <select count(*) from TableName> at the
>database to get rid of garbage on a specific table.
>
The best way is to ensure that you don't have indexes with long
duplicate chains, and that the back versions of a record all fit on the
same page as the primary version. That way, the garbage collection is
accomplished with the same physical write that changes the primary
data. In the absence of long running transactions, co-operative garbage
collection has that characteristic. The garbage collect thread can
have that characteristic too, but it depends on having no long running
transactions and enough idle time for a lower priority thread to run.
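
The count(*) trick you mention works because a full scan visits every
record, and with co-operative garbage collection the visiting
transaction cleans up what it touches:

    /* TableName as in your example; any statement that reads every
       record of the table does the same job. */
    SELECT COUNT(*) FROM TableName;

But it only helps once the transactions holding the garbage in place
have actually ended.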

> 5.) Why does garbage sometimes make the database grind to a halt?
>
I think the problem occurs when a duplicate chain in an index is too
large to fit in cache, so every record removed requires reading
thousands of pages.

>6.) Sometimes garbage collection is quick (10 min) and sometimes it
>takes long (12 hours), why?
>
Two possible answers. One is the length of duplicate chains. The other
is that as the garbage collect thread gets behind, more and more back
versions must be moved off the page that holds the primary record
version. Traversing multiple pages to find a back version is much, much
more expensive than finding it on the same page.

Regards,


Ann