Subject re[2]: [firebird-support] Firebird performance degradation
Author Francisco Heker
We gathered more info about our problem.
Helen, you were right. We were doing commit_retaining.
We changed it, but that didn't solve the problem.
Then we found out about the "Sweep interval" parameter.
It wasn't set to anything in the database.
We set it to 300, but that still didn't solve the problem.
We did some tests on a copy of the database. What we saw
is that transactions committed from the application
stay active on the server.
Here is the output of gstat -h:
Database header page information:
Flags 0
Checksum 12345
Generation 265688
Page size 1024
ODS version 10.1
Oldest transaction 262517
Oldest active 262518
Oldest snapshot 262518
Next transaction 265520
Bumped transaction 1
Sequence number 0
Next attachment ID 0
Implementation ID 16
Shadow count 0
Page buffers 0
Next header page 0
Database dialect 3
Creation date May 21, 2004 15:46:28
Variable header data:
Sweep interval: 300
As you can see, "Oldest active" is far (about 3000) behind "Next transaction"
and close to "Oldest transaction", so the sweep is not triggered.
Why are the transactions being kept active?
We are pretty sure that our application is not doing commit_retaining.
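To double-check the arithmetic from the gstat -h output above, here is a quick Python sketch. The counter values are copied from the header dump; the trigger condition follows the documented sweep rule (an automatic sweep starts only when "Oldest snapshot" minus "Oldest transaction" exceeds the sweep interval):

```python
# Transaction counters copied from the gstat -h output above.
counters = {
    "oldest_transaction": 262517,  # OIT
    "oldest_active": 262518,       # OAT
    "oldest_snapshot": 262518,     # OST
    "next_transaction": 265520,
}
sweep_interval = 300

# How far the oldest still-open transaction lags behind Next.
stuck_gap = counters["next_transaction"] - counters["oldest_active"]

# An automatic sweep fires only when OST - OIT exceeds the sweep interval.
sweep_gap = counters["oldest_snapshot"] - counters["oldest_transaction"]
sweep_would_run = sweep_gap > sweep_interval

print(stuck_gap)        # 3002: a transaction started ~3000 txns ago is still active
print(sweep_gap)        # 1: far below 300, so no automatic sweep
print(sweep_would_run)  # False
```

So even with the interval set to 300, the sweep can never fire while that old transaction stays active and pins the counters together.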

> At 03:39 PM 19/12/2005 -0300, you wrote:
>We run a Firebird server (superserver) on a Pentium IV with 1 GB RAM
>running Windows 2000.
>There are 10 clients interacting with the server at the same time.
>The Firebird server's memory consumption gradually rises during the day
>until memory becomes too low to operate and the server collapses.
>They start working in the morning, and in the afternoon the server
>collapses, so it has to be restarted and the story repeats itself...
>Is there something we should consider in the Firebird server configuration?

For this problem, *possibly*, if you are using Classic and have set your
page cache too high. However, this behaviour sounds much more like a
system that is not hard-committing any work.

There are two ways to commit work: Commit Retaining and Commit. If you're
not aware of this, you may be using a program interface that supports
"client Autocommit", that is, a Post operation that is combined with a
"soft commit". Since "posting" is a statement-level operation and
"committing" is a transaction-level operation, applications having this
problem gradually hold on to progressively more and more server resources
before collapsing after a few hours.
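The bookkeeping behind this can be sketched with a toy Python model. This is only an illustration of why CommitRetaining pins the "Oldest active" marker, not Firebird's actual internals:

```python
# Toy model: how hard Commit vs CommitRetaining moves the oldest-active marker.
class Server:
    def __init__(self):
        self.next_txn = 1
        self.active = {}  # txn id -> start id still held by an open context

    def start(self):
        txn = self.next_txn
        self.next_txn += 1
        self.active[txn] = txn
        return txn

    def commit(self, txn):
        # Hard commit: the transaction ends and releases its slot entirely.
        del self.active[txn]

    def commit_retaining(self, txn):
        # Soft commit: the work is made durable, but the transaction context
        # (and its original start id) is retained, so "oldest active" is pinned.
        pass

    def oldest_active(self):
        return min(self.active.values(), default=self.next_txn)

srv = Server()
t = srv.start()                # the application's long-lived transaction
for _ in range(3000):
    other = srv.start()        # other work keeps starting transactions...
    srv.commit(other)          # ...and hard-committing, moving Next forward
    srv.commit_retaining(t)    # our client only ever soft-commits

print(srv.next_txn - srv.oldest_active())  # 3001: the gap grows without bound
```

A single hard `commit(t)` at any point collapses the gap back to zero, which is exactly the behaviour the gstat counters in the first post fail to show.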

Delphi components, for example, generally provide AutoCommit transactions
by default. The standard Delphi data access model involves a long-running
transaction that SELECTs sets in read-write transactions and manufactures
I/U/D statements internally, which it both posts and soft-commits in one
hit, in the same logical client "transaction context". It was designed for
non-transactional database engines and it's fatal for transactional engines
unless used with extreme circumspection.

>We programmed our own replicator that adds a trigger to each table, logging
>the SQL statements (update, insert, delete) that are executed. Those logs
>are used to replicate.

If my suspicions are correct, then the absence of hard commits will cause you
to have enormous undo logs building up in memory (from the inserts to the log
tables) and big buildups of resources that, though abandoned by the clients
hours ago, remain in memory awaiting hard commits to release them.

./heLen