Subject: Re: [firebird-support] Performance degradation in Firebird
Author Bergbom Staffan
Thanks a lot for your suggestions.


-----Original Message-----
[] On behalf of Pavel Cisar
Sent: 5 October 2005 14:24
Subject: Re: [firebird-support] Performance degradation in Firebird


sbergbom wrote:
> I wonder if anyone has observed performance degradation when adding
> records to a Firebird database.
> In my case I have a stored procedure invoked from a database app,
> using dbExpress, with a TIdTCPServer component.
> The stored procedure makes a few lookups and updates, and adds records
> to a table used as a history log.
> Logging the time it takes to enter a record revealed a registration
> time starting at about 60 ms, increasing to about 100 ms after 300
> records added, 500 ms after 1600 records, and almost 1000 ms after
> 2400 records added.
> For every 100 records added, all records were deleted from the
> history-log table.
> Can anyone give me a hint about what causes this?

The problem is in the undo log that Firebird has to maintain for the
operations performed by your stored procedure. This undo log is necessary
to ensure the atomicity of a single SQL command (the procedure
invocation), so that when your procedure fails, all its changes are
undone. The more changes your SP makes, the larger the undo-log overhead.

Each SQL statement that has to be atomic has its own undo log, which is
merged into the caller's undo log (an SP can call DML or another SP,
triggers can make changes that fire other triggers, etc.). That also adds
to the overhead. The deeper the call chain, the worse it gets; and the
larger the undo log grows, the bigger the performance degradation you
will see.

To make it more complex, separate undo logs are also maintained for
individual WHEN ... DO blocks and for transaction savepoints.

To make it more interesting, the undo log is also maintained at the
topmost, transaction level. When this top-level log gets too big, it is
dropped, as it is not really necessary at the transaction level (MGA
would handle it); it is only an optimization device to spare some GC
work. When you know that the number of changes is going to be big, you
can save Firebird the effort of the transaction-level undo log by telling
it not to start one at all (via the appropriate transaction parameter;
see the documentation).
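The transaction parameter buffer (TPB) mentioned above is just a version byte followed by parameter bytes. Below is a minimal sketch of such a buffer in Python; the constant values are copied from Firebird's `ibase.h` for illustration and should be verified against your own headers, and the particular parameter combination is an assumption, not a recommendation:

```python
# TPB parameter constants as defined in Firebird's ibase.h
# (values reproduced here for illustration; verify against your headers).
isc_tpb_version3     = 3
isc_tpb_write        = 9
isc_tpb_wait         = 6
isc_tpb_concurrency  = 2
isc_tpb_no_auto_undo = 20  # do not maintain the transaction-level undo log

# A TPB for a read-write, concurrency-mode, waiting transaction with
# the transaction-level undo log disabled.
tpb = bytes([
    isc_tpb_version3,
    isc_tpb_write,
    isc_tpb_wait,
    isc_tpb_concurrency,
    isc_tpb_no_auto_undo,
])
print(tpb.hex())
```

In practice you would pass such a buffer to `isc_start_transaction()` (or let your driver build it for you); the point is only that `isc_tpb_no_auto_undo` is one more byte in this buffer.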

None of the above is a problem in interactive transactions, but it can
bite you hard in batch data processing (as you found out the hard way).
So if you need maximum throughput, you should follow this advice (in
order of importance):

1. Keep the nesting level of your operations low (no SPs calling other
SPs that also make changes, not too many WHEN ... DO blocks, etc.). Doing
everything with direct DML commands executed by the application is best.
That is not always possible, but every level saved helps.

2. Avoid unnecessary multiple changes in the same row.

3. Do not use transaction savepoints in batch operations.

4. Disable the transaction-level undo log with the isc_tpb_no_auto_undo TPB option.

5. Commit frequently, after every 100-2000 rows (depending on the
complexity of your changes; the more complex they are, the more
frequently you should commit).
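The commit-every-N-rows pattern from point 5 can be sketched as below. This is a generic illustration using an in-memory SQLite database as a stand-in; with Firebird you would obtain the connection from a Python driver such as fdb or firebird-driver instead, and the table name, column, and batch size of 500 (picked from the 100-2000 range above) are all assumptions:

```python
import sqlite3

def batch_insert(con, rows, batch_size=500):
    """Insert rows, committing after every batch_size rows.

    Frequent commits keep each transaction (and hence its undo log)
    small; 500 is an assumed value from the suggested 100-2000 range.
    """
    cur = con.cursor()
    for i, row in enumerate(rows, start=1):
        cur.execute("INSERT INTO history_log (msg) VALUES (?)", (row,))
        if i % batch_size == 0:
            con.commit()  # end the transaction, releasing its undo log
    con.commit()          # commit the final partial batch

# Demonstration with SQLite as a stand-in for a Firebird connection.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE history_log (msg TEXT)")
batch_insert(con, (f"row {n}" for n in range(1234)), batch_size=500)
print(con.execute("SELECT COUNT(*) FROM history_log").fetchone()[0])
```

The same shape applies regardless of driver: execute the DML inside the loop, and call commit on the connection every N rows rather than once at the very end.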

Hope this helps.

best regards
Pavel Cisar

