| Subject | RE: [firebird-support] Getting garbage info from the database |
| --- | --- |
| Author | Peter Ypenburg |
| Post date | 2003-12-04T18:18:27Z |
Hi Helen (and others),
When is your book available?
We use Firebird 1.0.3.972 on Linux and use Delphi 5 (IBX) as our
development tool (for what it's worth).
In our processes that run at night we keep the size of the transactions
small (about 1000 records at a time); we do, however, have cases where we
do lots of inserts (committing after every 1000). Those inserts do not
seem to have given us any trouble at all, BUT ...
we also do bulk deletes, and those are obviously done in one transaction,
e.g. (delete from LEADTIMEANALYSIS where SomeDate < SixMonthOld); this is
about 300,000 to 500,000 records at a time, and we commit that. The next
step we do is garbage collection, which is done by running (select
count(*) from LEADTIMEANALYSIS), and this is the thing that grinds the
database to a halt.
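For illustration, a chunked version of that delete (with hypothetical
:CHUNK_START / :CHUNK_END parameters supplied by the calling application)
could look something like this, committing after each slice instead of
deleting six months of rows in a single transaction:

  /* hedged sketch: delete one slice of old rows per transaction,
     then commit and repeat with the next date range until done */
  DELETE FROM LEADTIMEANALYSIS
  WHERE SomeDate >= :CHUNK_START
    AND SomeDate < :CHUNK_END;
  COMMIT;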
If I run: select RDB$INDEX_NAME from RDB$INDICES where RDB$RELATION_NAME
= 'LEADTIMEANALYSIS' then I get back:
RDB$PRIMARY20
RDB$FOREIGN21
RDB$FOREIGN22
RDB$FOREIGN23
RDB$FOREIGN24
Metadata for those indices looks like this:
ALTER TABLE LEADTIMEANALYSIS ADD FOREIGN KEY (ITEMNO) REFERENCES
ITEM(ITEMNO);
ALTER TABLE LEADTIMEANALYSIS ADD FOREIGN KEY (SUPPLIERNO) REFERENCES
SUPPLIER(SUPPLIERNO);
ALTER TABLE LEADTIMEANALYSIS ADD FOREIGN KEY (ORDERSUPPLIERNO)
REFERENCES SUPPLIER(SUPPLIERNO);
ALTER TABLE LEADTIMEANALYSIS ADD FOREIGN KEY (DELIVEREDCALENDARNO)
REFERENCES CALENDAR(CALENDARNO);
If I try to run: alter index RDB$PRIMARY20 inactive, I get the following
error:
unsuccessful metadata update
MODIFY RDB$INDICESS failed
action cancelled by trigger (3) to preserve data integrity
Cannot deactivate primary index
(A useless note for the Firebird spell checker: indices is spelled with
one s, not two.)
So then I tried to run: alter index RDB$FOREIGN21 inactive and I get the
same error. I suspect the reason is that the foreign key is linked to
the primary key on the other table and you cannot deactivate a primary
index.
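To check that suspicion, a quick sketch against the system tables should
list which of those indexes are owned by constraints (the ones the engine
refuses to deactivate):

  SELECT RDB$CONSTRAINT_NAME, RDB$CONSTRAINT_TYPE, RDB$INDEX_NAME
  FROM RDB$RELATION_CONSTRAINTS
  WHERE RDB$RELATION_NAME = 'LEADTIMEANALYSIS';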
QUESTIONS:
With mass deletes I need to take care of indices, but how do I
programmatically deactivate the indices without going to the extent of
dropping and recreating the constraints?
How do I know (if possible) which index is messed up? Is there a way to
test or check indices programmatically?
One thing to note is that we have not explicitly named our constraints
but rather let Firebird name them; could this be part of the reason why I
am not able to deactivate the indexes?
Thank you,
Peter Ypenburg
AGM Product Development
Barloworld Optimus
Life is beautiful.
-----Original Message-----
From: Helen Borrie [mailto:helebor@...]
Sent: 02 December 2003 03:45 PM
To: firebird-support@yahoogroups.com
Subject: Re: [firebird-support] Getting garbage info from the database
At 03:24 PM 2/12/2003 +0200, you wrote:
>Hi There,
>
>We have a client server application that connects to the database and
>allows the user to view and change some data. At night we have programs
>that run and import new data (from host systems like SAP etc) and does
>many, many calculations and manipulations of the data so that when the
>user connects the next day he could see the results of last nights
>import and data calculations. Our application is mission critical to
>the client, the next day he can not do his work until our overnight
>processes are done.
>
>One of the things we have that is killing us at the moment (we suspect)
>is garbage. As these processes that run over night (and creates
>garbage)

Inserts don't create garbage. But big inserts can mess up indexes. So if
you are inserting and then following that up with a lot of updates and
deletes, you'll get garbage.
>has a limited time window in which to finish. If an overnight batch
>blows out by 6 hours the client is stuffed and furious. This makes our
>lives very difficult as we are forever trying to fight fires.
>
>We need help so here are a couple of questions:
>
>1.) Can some one point me to a in depth tutorial about garbage in the
>database?

In my forthcoming book. :-)

>2.) How do I find out at any stage what is the state of the amount of
>garbage in the database?

You can't, really, except by shutting the database down and looking at the
growth of the file(s).
>3.) How do I find out where (what tables) the garbage is affecting the
>database?

If your big batch operation is causing this, then it will be those
tables. Garbage is old record versions and index stumps. The garbage sits
in pages where your active data are stored. If you are using one huge
transaction to do everything, you'll slow things down horribly, because
record version will pile upon record version for all the rows the
transaction works on. "Double-dipping" the same record multiple times in a
single transaction causes this.
-- break up the inserts into chunks of 7000 - 10,000 and make sure you use
a hard commit.
-- commit inserts before starting to do DML on the new rows.
Huge deletes and inserts mess up indexes. You could experiment with
setting indexes inactive for the inserts and then active again for the
DML. A big part of your slowdown is probably from unbalanced indexes.
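As a rough sketch of that experiment (assuming an ordinary,
non-constraint index with the hypothetical name IDX_LTA_SOMEDATE;
constraint-backed RDB$ indexes cannot be switched off this way):

  ALTER INDEX IDX_LTA_SOMEDATE INACTIVE;
  COMMIT;
  /* ... run the bulk inserts / deletes here ... */
  ALTER INDEX IDX_LTA_SOMEDATE ACTIVE; /* reactivating rebuilds the index */
  COMMIT;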
>4.) What is the best way to get rid on the garbage when there is
>garbage? We currently fire <select count(*) from TableName> to the
>database to get rid of garbage on a specific table.

That will do it.
>5.) Why does garbage sometimes make the database grind to a halt?

Not hard-committing update and delete work often enough. But - it might
not be all garbage that is slowing things down. Take care of the indexes.
>6.) Sometimes garbage collection is quick (10 min) and sometimes it
>takes long (12 hours), why?

Garbage collection goes on all the time in background, if it can. Without
good transaction management, you inhibit that background GC and so the
first transaction after completion of your job will get hit with a load.
/heLen