Subject | Re: [firebird-support] Re: Oldest Snapshot slowing down DB |
---|---|
Author | Ann W. Harrison |
Post date | 2006-02-08T19:18:23Z |
imoffskating wrote:
> Thank you for your reply,
> yes firebird does seem to hog 100% of the cpu, and restarting firebird
> sorts everything out resetting oldest snapshot back to the same as
> oldest transaction.
> Do you think turning sweep interval on would help? and how do i do this?
>
No, changing the sweep interval won't help. Reducing the difference
between the oldest snapshot and the next transaction would help some,
but the likely problem is long duplicate chains in indexes.
100% CPU utilization often indicates that you've got a problem with
index garbage collection. The real fix is Version 2, which has a new
index format that reduces the cost of removing entries from a long
duplicate chain from approximately n squared to approximately n/2,
where n is the number of duplicates. Version 2 isn't released yet,
but there are workarounds.
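To make that difference concrete, here is a quick back-of-the-envelope sketch. It models only the growth rates quoted above, not real timings from either engine:

```python
# Illustration of the removal-cost figures above: garbage-collecting an
# entry from a chain of n duplicate index entries costs on the order of
# n squared operations in the 1.x index format, versus roughly n/2 in
# the Version 2 format. These model the quoted growth rates only.
def v1_removal_cost(n):
    return n * n       # ~n squared

def v2_removal_cost(n):
    return n // 2      # ~n/2

n = 30000              # the duplicate-chain length flagged as risky below
print(v1_removal_cost(n))   # 900000000
print(v2_removal_cost(n))   # 15000
```

At a chain length of 30,000 that is a factor of 60,000, which is why the garbage collector can saturate a CPU for long stretches.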
The first step is to determine whether you have long duplicate
chains in your indexes. The tool for that is gstat, a command-line
utility that must be run on the machine that hosts the database.
Run gstat with the -a switch, redirect the output to a file, then
open the file with a text editor and look for the string
max dup:
If the value following that string is greater than 30,000, you've
got an index that may be causing you problems. It may be an index
you defined, or it may be an index that Firebird created for you
as part of a foreign key definition.
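If the gstat output is long, a small script can do the scanning for you. This is a sketch that assumes gstat's usual per-index layout (an "Index NAME" header line followed by a statistics line containing "max dup:"); the sample text, index names, and numbers below are made up for illustration:

```python
import re

# Made-up excerpt in the shape of "gstat -a" index statistics.
SAMPLE = """\
Index RDB$FOREIGN12 (1)
    Depth: 3, leaf buckets: 512, nodes: 250000
    Average data length: 0.00, total dup: 249000, max dup: 48000
Index CUSTNAMEX (2)
    Depth: 2, leaf buckets: 50, nodes: 20000
    Average data length: 5.00, total dup: 120, max dup: 14
"""

def long_dup_indexes(text, threshold=30000):
    """Return (index_name, max_dup) pairs where max dup exceeds threshold."""
    results = []
    current = None
    for line in text.splitlines():
        header = re.match(r"\s*Index\s+(\S+)", line)
        if header:
            current = header.group(1)
        stats = re.search(r"max dup:\s*(\d+)", line)
        if stats and current is not None:
            max_dup = int(stats.group(1))
            if max_dup > threshold:
                results.append((current, max_dup))
    return results

print(long_dup_indexes(SAMPLE))   # [('RDB$FOREIGN12', 48000)]
```

In this sample, only the foreign-key index (the `RDB$FOREIGN...` name is the pattern Firebird uses for automatically created foreign key indexes) crosses the 30,000 threshold.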
The workaround in Version 1.x depends on the type of index:
For user-defined indexes, drop the index and create a replacement
multi-key index that includes the fields of the old index plus the
primary key of the table as the last segment.
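As a sketch, suppose the long chains are in an index on a CUSTOMER_ID column of an ORDERS table whose primary key is ORDER_ID (all of these names are hypothetical); the replacement would look something like:

```sql
-- Hypothetical names: ORDERS table, ORDER_ID primary key,
-- IDX_ORDERS_CUST the old single-column index on CUSTOMER_ID.
DROP INDEX IDX_ORDERS_CUST;

-- Appending the primary key makes every entry in the index unique,
-- so long duplicate chains cannot form.
CREATE INDEX IDX_ORDERS_CUST ON ORDERS (CUSTOMER_ID, ORDER_ID);
```

Queries that select on CUSTOMER_ID alone can still use the compound index, since CUSTOMER_ID is its leading segment.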
For indexes created as part of foreign key relationships, the solution
is to drop the offending foreign keys and emulate their behavior with
triggers.
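A sketch of that emulation, with a hypothetical PARENT/CHILD pair (the constraint, exception, and trigger names here are invented): drop the constraint, then re-check the relationship in a trigger. Note this is weaker than a real foreign key; a full emulation also needs a BEFORE UPDATE trigger on CHILD and delete/update triggers on PARENT to prevent orphaned rows.

```sql
-- Hypothetical schema: CHILD.PARENT_ID referenced PARENT.ID through
-- the constraint FK_CHILD_PARENT, whose automatically created index
-- carried the long duplicate chain.
ALTER TABLE CHILD DROP CONSTRAINT FK_CHILD_PARENT;

CREATE EXCEPTION E_NO_PARENT 'CHILD.PARENT_ID has no matching PARENT.ID';

SET TERM ^ ;
CREATE TRIGGER CHILD_FK_INS FOR CHILD
ACTIVE BEFORE INSERT AS
BEGIN
  IF (NEW.PARENT_ID IS NOT NULL AND
      NOT EXISTS (SELECT 1 FROM PARENT WHERE PARENT.ID = NEW.PARENT_ID)) THEN
    EXCEPTION E_NO_PARENT;
END ^
SET TERM ; ^
```

Dropping the constraint removes the automatic index along with it, which is what eliminates the duplicate-chain garbage collection cost.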
Regards,
Ann