Subject | Re: [firebird-support] Firebird performance degradation
---|---
Author | Helen Borrie |
Post date | 2005-12-20T00:36:09Z |
At 03:39 PM 19/12/2005 -0300, you wrote:
>Hi
>
>We run a Firebird server (Superserver) on a Pentium IV with 1 GB RAM and
>Windows 2000.
>There are 10 clients interacting with the server at the same time.
>The Firebird server's memory consumption gradually rises during the day
>until free memory becomes too low to operate and the server collapses.
>They start working in the morning and by the afternoon the server
>collapses, so it has to be restarted and the story repeats itself...
>Is there something we should consider in the Firebird server configuration?

For this problem, *possibly*: the page cache could be a factor if you are
using Classic and have set it too high. However, this behaviour sounds much
more like a system that is not hard-committing any work.

There are two ways to commit work: Commit Retaining and Commit. If you're
not aware of this, you may be using a program interface that supports
"client Autocommit", that is, a Post operation that is combined with a
"soft commit" (Commit Retaining). Since "posting" is a statement-level
operation and "committing" is a transaction-level operation, applications
with this problem gradually hold on to more and more server resources
before collapsing after a few hours.
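
To make the difference concrete, here is a minimal isql-style sketch. The
table and column names are just placeholders (nothing from your schema);
the point is what each verb does:

  /* soft commit: the changes become permanent, but the transaction
     context, and the server resources tied to it, stay alive */
  INSERT INTO REPL_LOG (TABLE_NAME, OPERATION) VALUES ('CUSTOMERS', 'U');
  COMMIT RETAIN;

  /* hard commit: ends the transaction and lets the server release
     the resources it was holding for it */
  INSERT INTO REPL_LOG (TABLE_NAME, OPERATION) VALUES ('ORDERS', 'I');
  COMMIT;

An application that only ever issues the first form never gives the server
a chance to clean up after those transactions.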

Delphi components, for example, generally provide AutoCommit transactions
by default. The standard Delphi data access model involves a long-running
read-write transaction that SELECTs sets and manufactures I/U/D statements
internally, which it both posts and soft-commits in one hit, in the same
logical client "transaction context". It was designed for non-transactional
database engines and it's fatal for transactional engines unless used with
extreme circumspection.
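
In Firebird terms, a session under that model looks something like the
sketch below (the statements themselves are made up; the commit pattern is
the point). Every Post becomes a statement followed by a Commit Retaining,
and the transaction started in the morning never gets a hard commit:

  SET TRANSACTION READ WRITE WAIT ISOLATION LEVEL SNAPSHOT;
  UPDATE CUSTOMERS SET PHONE = '555-0100' WHERE CUST_ID = 1;
  COMMIT RETAIN;
  INSERT INTO ORDERS (ORDER_ID, CUST_ID) VALUES (1001, 1);
  COMMIT RETAIN;
  DELETE FROM ORDER_ITEMS WHERE ORDER_ID = 900;
  COMMIT RETAIN;
  /* ...and so on all day: never a plain COMMIT, so the transaction
     that was started in the morning is still alive in the afternoon */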

>We programmed our own replicator that adds a trigger to each table; the
>trigger logs the SQL operations (update, insert, delete) that are executed.
>Those logs are used to replicate.

If my suspicions are true, then the absence of hard commits will cause you to
have enormous undo logs building up in memory (from the inserts to the log
tables) and big buildups of resources that, though abandoned by the clients
hours ago, remain in memory awaiting hard commits to release them.
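
For anyone who hasn't built this kind of thing: those replication triggers
fire inside the same transaction as the user's statement, so every logged
row adds to that transaction's undo log until a hard commit arrives. A
minimal sketch of such a trigger, with hypothetical table and column names
(not the poster's actual schema), might look like this:

  SET TERM ^ ;
  CREATE TRIGGER CUSTOMERS_LOG_AI FOR CUSTOMERS
  AFTER INSERT
  AS
  BEGIN
    /* record the operation for the replicator to replay later */
    INSERT INTO REPL_LOG (TABLE_NAME, OPERATION, LOGGED_AT)
    VALUES ('CUSTOMERS', 'I', CURRENT_TIMESTAMP);
  END ^
  SET TERM ; ^

The trigger itself is fine; the point is that its INSERT runs inside the
client's own transaction, so the undo records for those log rows pile up
until that transaction gets a real Commit.
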
./heLen