| Subject | RE: [firebird-support] Getting garbage info from the database |
| --- | --- |
| Author | Peter Ypenburg |
| Post date | 2003-12-18T16:09:45Z |
Hi there,
In the email below, Firebird 1.5 is mentioned a couple of times. What
I need to know:
- is there an improvement in the way garbage collection works in Firebird
1.5 vs. Firebird 1 (on Linux)?
- is there an improvement in the indexes that get messed up by mass
deletes in Firebird 1.5?
If I do not go through the trouble of changing source code to drop
constraints, and just run Firebird 1.5, will there be a magical improvement
in what we currently have?
Thank you,
Peter Ypenburg
AGM: Product Development
Barloworld Optimus
-----Original Message-----
From: Ann Harrison [mailto:aharrison@...]
Sent: Wednesday, December 03, 2003 10:39 PM
To: firebird-support@yahoogroups.com
Subject: Re: [firebird-support] Getting garbage info from the database
Peter Ypenburg wrote:
>One of the things we have that is killing us at the moment (we suspect)
>is garbage.
>
Ah. I was afraid you were getting bad answers. However irritating it
is to get no answer, no answer is still better than a wrong answer.
>At night we have programs that run and import new data (from host systems
>like SAP etc.) and does many, many calculations and manipulations of the
>data so that when the user connects the next day he could see the results
>of last nights import and data calculations.
>
>As these processes that run over night (and creates garbage)
>has a limited time window in which to finish. If an overnight batch
>blows out by 6 hours the client is stuffed and furious.
>
Clients really can be a nuisance, too bad they're so necessary.
Your description really doesn't give me a clear idea of what's going on
at the database level in terms of inserts, updates, and deletes. As
Helen said, inserts don't create garbage. Deletes don't create much
garbage. Updates, on the other hand, can create lots of garbage. A
record is deleted only once. It can be updated innumerable times, and
if the updates overlap with a long running transaction, innumerable back
versions will be kept in the database and must be garbage collected when
the old transaction finally finishes.
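To make that concrete, here is a minimal two-session isql sketch; the
table and column names (DAILY_TOTALS, AMOUNT, ID) are placeholders I've
made up, not anything from this thread:

```sql
-- Session 1: a long-running SNAPSHOT transaction. Until it ends, the
-- server must keep every record version this transaction might still
-- need to see.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT SUM(AMOUNT) FROM DAILY_TOTALS;
-- ...transaction left open for hours...

-- Session 2: each committed update of the same record leaves another
-- back version behind, and none of them can be garbage collected
-- until Session 1 finally commits or rolls back.
UPDATE DAILY_TOTALS SET AMOUNT = AMOUNT + 1 WHERE ID = 42;
COMMIT;
```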
>We need help so here are a couple of questions:
>
>1.) Can some one point me to a in depth tutorial about garbage in the
>database?
>
Not really. We're just beginning to really understand some of the
implications of garbage collection. For example, we've recently
noticed that in SuperServer, under load, the garbage collect thread can
be excluded completely by active transaction threads, leading to a New
York City Sanitation Workers Strike situation. Firebird 1.5 reduces
that problem by running the garbage collect thread at the same priority
as transaction threads. A variant of Firebird 1.03 eliminates the
garbage collect thread completely, reverting to the original
co-operative garbage collection mechanism.
A less recent discovery was the effect of long duplicate chains on
garbage collection. In early versions of InterBase, few people put
indexes on fields like "gender." Foreign keys changed that, because they
automatically generate indexes, and it seems quite reasonable to restrict
"gender" with a foreign key tying it to legal values - "male", "female",
"both", "neither", "unknowable".
I can suggest a couple of strategies. First, look for indexes with
low selectivity - lots of duplicates, and particularly long duplicate
chains. (Gstat -a will find them.) Add a segment to those indexes to
make them unique or more nearly unique. For example, if you've got an
index on the field "gender", make it a compound index on "gender" and
"birth date". If you've got a foreign key linking "gender" to a parent
sex table (well, how else did it get to be a parent?), replace it with a
trigger, as sketched below.
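A minimal sketch of both suggestions, using hypothetical names
throughout (PERSON, GENDERS, FK_PERSON_GENDER, E_BAD_GENDER) - adapt to
your own schema:

```sql
-- Widen the low-selectivity index by adding a second segment.
CREATE INDEX IDX_PERSON_GENDER_DOB ON PERSON (GENDER, BIRTH_DATE);

-- Replace the foreign key (and the duplicate-heavy index it creates
-- automatically) with a trigger enforcing the same legal values.
ALTER TABLE PERSON DROP CONSTRAINT FK_PERSON_GENDER;

CREATE EXCEPTION E_BAD_GENDER 'Invalid value for GENDER';

SET TERM ^ ;
CREATE TRIGGER PERSON_GENDER_CHECK FOR PERSON
ACTIVE BEFORE INSERT OR UPDATE
AS
BEGIN
  IF (NEW.GENDER IS NOT NULL AND
      NOT EXISTS (SELECT 1 FROM GENDERS G WHERE G.GENDER = NEW.GENDER))
  THEN EXCEPTION E_BAD_GENDER;
END ^
SET TERM ; ^
```

Note that the multi-action BEFORE INSERT OR UPDATE form needs Firebird
1.5; on 1.0 you would write two single-action triggers instead.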
If the contents of some tables are completely removed and replaced,
drop and recreate the tables instead of deleting the records
individually. If that's not possible because of dependencies, drop the
indexes on the table before emptying it and put them back after
it's been reloaded.
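For example (IMPORT_DATA and IDX_IMPORT_KEY are hypothetical names;
ALTER INDEX ... INACTIVE only works on plain indexes, not on the ones
backing constraints, which would have to be dropped and recreated):

```sql
ALTER INDEX IDX_IMPORT_KEY INACTIVE;  -- stop maintaining the index
DELETE FROM IMPORT_DATA;
COMMIT;
-- ...reload the table here...
ALTER INDEX IDX_IMPORT_KEY ACTIVE;    -- rebuild the index in one pass
COMMIT;
```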
Are there long transactions? If so, can they be made read-committed
read-only so they don't inhibit garbage collection?
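In isql that would look like this; per the suggestion above, a
transaction started this way can run for hours without inhibiting
garbage collection:

```sql
SET TRANSACTION READ ONLY ISOLATION LEVEL READ COMMITTED;
-- ...long-running report queries here...
COMMIT;
```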
Try 1.5, or the version of 1.0 without a separate garbage collect
thread.
>2.) How do I find out at any stage what is the state of the amount of
>garbage in the database?
>
gstat -a -r, looking for large numbers of back versions.
>3.) How do I find out where (what tables) the garbage is affecting the
>database?
>
As above, though you can also get an estimate by analyzing your
application.
>4.) What is the best way to get rid on the garbage when there is
>garbage? We currently fire <select count(*) from TableName> to the
>database to get rid of garbage on a specific table.
>
The best way is to insure that you don't have indexes with long
duplicate chains and that the back versions of a record all fit on the
same page with the primary version. That way, the garbage collection is
accomplished with the same physical write that changes the primary
data. In the absence of long running transactions, co-operative garbage
collection has that characteristic. The garbage collect thread can
have that characteristic too, but it depends on having no long running
transactions and having enough idle time for a lower priority thread to run.
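The <select count(*)> trick from the question fits this picture: any
statement that visits every record lets the server garbage-collect the
old versions it passes, co-operatively. The table name below is a
placeholder:

```sql
-- A full scan touches every record, so the server can garbage-collect
-- the back versions it encounters along the way.
SELECT COUNT(*) FROM IMPORT_DATA;
```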
>5.) Why does garbage sometimes make the database grind to a halt?
>
I think the problem occurs when a duplicate chain in an index is too
large to fit in cache, so every record removed requires reading
thousands of pages.
>6.) Sometimes garbage collection is quick (10 min) and sometimes it
>takes long (12 hours), why?
>
Two possible answers. One is the length of duplicate chains. The other
is that as the garbage collect thread gets behind, more and more back
versions must be moved off the page with the primary record version.
Traversing multiple pages to find a back version is much, much more
expensive than finding it on the same page.
Regards,
Ann