Subject | RE: [firebird-support] fb statistic
---|---
Author | Helen Borrie
Post date | 2004-12-22T10:09:36Z
At 09:36 AM 22/12/2004 +0000, you wrote:
>If my understanding is correct, then the sweep interval, if set to let's say
>20,000, would wait for a gap of this amount between the oldest
>transaction and the oldest active transaction; once this gap is reached
>between these two counters, it would fire a sweep.
>
>If you set the sweep interval to 0 it will never fire a sweep automatically;
>you will always have to fire one off manually (see the command sketch after
>the quoted text).
>
>I think the only way you are going to get the transaction counters back in
>line is a backup and restore, but more importantly you need to find out where
>your transaction is being held open, as this is what is causing your gap.
>
>   The CPU usage of my FB always grows up to 99%. Here is the
>   statistic of my FB. I have set the sweep interval
>   to 0, but the difference between the next transaction and the
>   oldest active transaction is very large.
>
> Database header page information:
> Flags 0
> Checksum 12345
> Generation 1662925
> Page size 4096
> ODS version 10.1
> Oldest transaction 1636503
> Oldest active 1636504
> Oldest snapshot 1615008
> Next transaction 1662909
> Bumped transaction 1
> Sequence number 0
> Next attachment ID 0
> Implementation ID 19
> Shadow count 0
> Page buffers 0
> Next header page 0
> Database dialect 3
> Creation date Jun 4, 2004 9:44:42
> Attributes
>
> Variable header data:
> Sweep interval: 0
> *END*
>
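Both suggestions above map onto the command-line tools; a minimal sketch,
assuming a hypothetical database path and default SYSDBA credentials:

   gfix -user SYSDBA -password masterkey -housekeeping 20000 /data/mydb.fdb
   gfix -user SYSDBA -password masterkey -sweep /data/mydb.fdb
   gbak -user SYSDBA -password masterkey -backup /data/mydb.fdb /data/mydb.fbk
   gbak -user SYSDBA -password masterkey -create /data/mydb.fbk /data/mydb_new.fdb

The first line sets the automatic sweep threshold, the second fires a sweep
manually, and the gbak pair is the backup/restore cycle that resets the
transaction counters.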
The gap between oldest (interesting) and oldest active is fine. It means
you don't have a lot of very old garbage that is hanging about.
Even a gap of ~26,000 may not indicate anything worse than a read-write
transaction that is not getting committed in a timely manner. Whether this
is "outrageous" really depends on the volume of transactions your system is
putting through.
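For the record, the numbers in your header give:

   Oldest active    1636504 - Oldest transaction 1636503 =     1
   Next transaction 1662909 - Oldest active      1636504 = 26405

so the only notable gap is between the oldest active and next transaction.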
It's not clear what you mean when you say "CPU usage...always grows to
99%". Could you explain what you observe?
It could be just that your users are doing a lot of deletes and updates and
so the garbage collection thread is getting a lot of work to do. The size
of the gap between oldest active and next transaction otherwise doesn't
have any impact on CPU. Large sorts and complex joins involving large sets
would tend to cause "spikes" in CPU usage. If you are observing the server
process *always* at 99% then something else is going on--possibly something
is stuck in a loop, or a selectable stored proc that is processing a huge
number of rows is being held in suspension by an idle client.
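To picture that last case: a selectable procedure is one the client fetches
row by row, so an idle client can leave it (and its transaction) suspended
mid-loop. A minimal sketch with hypothetical names, as you would enter it in
isql:

   SET TERM ^ ;
   CREATE PROCEDURE BIG_SCAN RETURNS (ID INTEGER)
   AS
   BEGIN
     FOR SELECT ID FROM BIG_TABLE INTO :ID DO
       SUSPEND; /* pauses here until the client fetches the next row */
   END ^
   SET TERM ; ^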
You need to say what server version and model you are using and what
platform it is running on.
./hb