Subject:   Re: [firebird-support] FB uses 100% cpu
Author:    Ann W. Harrison
Post date: 2005-06-15T18:30:32Z
Guido Klapperich wrote:
> We are using FB 1.5.1 SS on an NT4 Server machine. We have a problem:
> FB sometimes keeps using 100% of the CPU and then we have to
> restart FB. I guess that a transaction is not closed properly, but I
> don't know how to find out which one and which user has started the
> transaction. Is there a way to get this information?

There's no good way to figure out what transaction is being left open,
and by the time you see the 100% CPU problem, that transaction is
closed. When it closes, it releases all the garbage it had been
protecting, and the garbage collection starts eating CPU. However,
there's usually another factor when you see that level of CPU
utilization: non-selective indexes. Ordinary garbage collection
imposes some overhead and will show up in performance. Garbage
collection on indexes with lots of duplicates will eat your CPU.
(Check the archives - I've explained this a few times before -
essentially the problem is that duplicates are stored with the most
recent at the front of the list, then removed oldest first. Removing a
record requires reading the whole duplicate chain.)
You can look for indexes with large numbers of duplicates. Run gstat -a
and pipe the output to a file. In that file, look for the string
"max dup: " and values > 10000.
Drop those indexes and replace them with compound indexes that start
with the original keys and add a more selective segment. For example,
if you always store bills with the date paid missing, then update them
when the payment arrives, this index is undesirable:
create index bad on bills (date_paid);
and should be replaced with one like this:
create index better on bills (date_paid, customer_id);
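In practice that change is just a drop and a create, run from isql or
any other tool and committed afterwards (the index and column names
are only the made-up example from above):

  drop index bad;
  create index better on bills (date_paid, customer_id);
  commit;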
The new index structure in Firebird 2 fixes that problem.
Regards,
Ann