Subject Re: [firebird-support] Re: Update: CPU Pegged on Firebird v2
Author Helen Borrie
At 07:35 AM 14/12/2006, Alexandre Benson Smith wrote:

> Oldest active 453
> Oldest snapshot 453
> Next transaction 358860
>
>
>You have a big gap in transactions, meaning that you have had a
>transaction open for a long time.

True.
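For clarity, the size of that gap can be read straight off the header statistics quoted above; a minimal sketch of the arithmetic:

```shell
# Values from the gstat -h header statistics quoted above:
OLDEST_ACTIVE=453
NEXT_TRANSACTION=358860

# Garbage created by any transaction newer than the oldest active one
# cannot be fully collected while that transaction stays open.
echo $((NEXT_TRANSACTION - OLDEST_ACTIVE))   # prints 358407
```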


>Your sweep interval is set to 0 which means that the garbage collection
>thread will not start automatically on FB 1.5,

Not true. It means that the sweeper will never operate
automatically. Garbage collection could potentially be going on at
any time, if its monitoring detects there is anything to collect.

>you should run gfix to sweep the database manually.

True, this would be a good idea, given statistics like
these. Sweeping can potentially clear out some forms of garbage that
the GC utilities can't touch. But a sweep isn't going to solve the
gap problem if these very old transactions really are still running.
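As a sketch, the manual sweep being suggested would look something like this from the command line (the database path and credentials here are placeholders, not from the original thread):

```shell
# Sweep the database manually, clearing garbage left behind by old,
# now-finished transactions (path and credentials are examples only):
gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb

# Optionally restore an automatic sweep interval; the default is
# 20000 transactions, and 0 disables automatic sweeping:
gfix -housekeeping 20000 -user SYSDBA -password masterkey /data/mydb.fdb
```

As noted above, a sweep can only collect garbage from transactions that have actually ended; it cannot close the gap while the old transactions are still running.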


>In FB 2.0 the default garbage collection policy is a combination of a
>background thread and cooperative garbage collection.
>In FB SS 1.X there is only the background thread.
>In FB 1.X CS only the cooperative GC is available.
>
>The cooperative GC means that each "user" that reads the data pages
>performs the GC as it finds it. The cooperative GC tends to keep the
>garbage at a low level, but at the cost that a query could be slower,
>as it has to clean the garbage it finds.

Correct; but cooperative GC doesn't cause the next user to clean the
whole database, only the garbage from completed transactions that
involved the table(s) that user accesses. After a bulk delete, for
example, this could still mean visiting a lot of pages.

>Maybe if you change your firebird.conf for FB 2.0 to use background GC
>you will have the same speed as FB 1.5.

Or not. The usefulness (or not) of a particular GCPolicy depends a
lot on the kind of work that is being done. One should test it in an
orderly fashion, which means not in a situation where any GC
mechanism is handicapped by habitually long-running transactions.
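For completeness, the setting under discussion lives in firebird.conf; in FB 2.0 SuperServer the recognised values are cooperative, background and combined (the 2.0 default, per the quoted description). A sketch of the change being proposed:

```
# firebird.conf (FB 2.0 SuperServer)
#   GCPolicy = combined      default in 2.0: background thread + cooperative
#   GCPolicy = background    background thread only, as in FB 1.5 SS
#   GCPolicy = cooperative   readers clean as they go, as in Classic
GCPolicy = background
```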

>But as Sean said, your problem lies in having a transaction open
>for a long time.

Even in the restored database, under 1.5, said to be "running
lickety-split", Mr Slalom has managed to build up a gap of more than
358,000 transactions in 24 hours. So, whatever backend the
application is running over, and whatever GCPolicy is set, it's going
to bite again at some point.

As a further comment, this thread has become totally confused. I
seem to recall way back that Mr Slalom told us he had upgraded the
database on moving to Fb 2.0. The header stats suggest
otherwise: that database hadn't been restored since July. So one
continues to wonder about the index statistics as well.

One also has to question why a read-write database is being run
with no reserve.

>The combined approach tends to lead to better system performance

It will *help* in an environment where sheer transaction volume means
that background GC isn't keeping up with the volume of collectible
garbage. But it can't cure application design flaws or user
misbehaviour. Where client applications disregard the effects of
concurrency, uncollectible garbage will tend to stay put and
progressively degrade performance over time, regardless of the GCPolicy.

./heLen