Subject Re: Embedded Firebird : Transaction
Author Adam
It might be splitting hairs, but conceptually I have a problem with
committing every n transactions for the sake of performance.

Transactions should be used to define your atomic set of operations.
If you are inserting 10 million records, and the operation is complete
after every insert, then it should be committed after every insert. If
there is a problem inserting the last record which means that some or
all of the inserts need to be rolled back, then it should be inside a
single transaction. It is also about ensuring database consistency,
remembering that new transactions can see what you have committed so
far, and this may not be a consistent state for the database.
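The all-or-nothing case described above can be sketched generically. This is just a minimal illustration using Python's stdlib sqlite3 (not Firebird-specific; the table name and error are made up for the example), showing why the inserts belong in one transaction when a late failure must undo them all:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

# All inserts form one atomic unit: if inserting the last record
# fails, the whole batch is rolled back and none of them persist.
try:
    with con:  # opens a transaction; commits on success, rolls back on error
        for i in range(10):
            con.execute("INSERT INTO t (id, val) VALUES (?, ?)", (i, "row"))
        raise RuntimeError("problem inserting the last record")
except RuntimeError:
    pass

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 0 -- every insert was rolled back
```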

<snip from 1.5 release notes>

Internal savepoints

By default, the engine uses an automatic transaction-level system
savepoint to perform transaction rollback. When you issue a ROLLBACK
statement, all changes performed in this transaction are backed out
via a transaction-level savepoint and the transaction is then
committed. This logic reduces the amount of garbage collection caused
by rolled back transactions.

When the volume of changes performed under a transaction-level
savepoint gets large (10^4-10^6 records affected), the engine
releases the transaction-level savepoint and uses the TIP mechanism
to roll back the transaction if needed. If you expect the volume of
changes in your transaction to be large, you can use the TPB flag
isc_tpb_no_auto_undo to avoid the transaction-level savepoint being
created.


That is probably where the advice to commit every n records originated.
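For reference, the "commit every n" pattern being questioned looks roughly like this. This is a generic sketch using Python's stdlib sqlite3, not Firebird (driver-level details such as the isc_tpb_no_auto_undo TPB flag are not shown), and the batch size of 1000 is an arbitrary example value:

```python
import sqlite3

BATCH_SIZE = 1000  # arbitrary example value

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

# Commit every BATCH_SIZE inserts. Note the caveat raised in the post:
# other transactions can see each committed batch, so the database may
# be visible in a half-loaded, inconsistent state between batches.
for i in range(10_000):
    con.execute("INSERT INTO t (id) VALUES (?)", (i,))
    if (i + 1) % BATCH_SIZE == 0:
        con.commit()
con.commit()  # commit any trailing partial batch

print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 10000
```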

But if I read this correctly, even if you were to insert 10 million
records in a single transaction, Firebird would release the
transaction-level savepoint once the change volume grew large, rather
than let the savepoint log keep growing.

Out of curiosity, does the client application have to wait around
while the 10000 to 1000000 changes are undone? Obviously from a GC
point of view, it is preferable to reverse all the changes and mark
the transaction as committed, but I imagine flagging the TIP as rolled
back and leaving the garbage for someone else to clean up would be
nicer in some cases.


--- In, Nando Dessena <nando@d...> wrote:
> Ann,
> >> IP> Unless you also have some triggers that repeatedly update the
> >> IP> same rows, you do not have to commit every "n" inserts, it
> >> IP> will not speed up anything.
> >>
> >> According to my experience, if the available RAM is exhausted by
> >> Firebird running a very long transaction, splitting it *will* help
> >> performance. I don't know if the OP is inserting thousands or millions
> >> of records, so I took the safer approach.
> AWH> A transaction, even a long transaction that does lots of inserts,
> AWH> should not exhaust RAM. Gbak restores entire databases in a single
> AWH> transaction.
> Well, I guess I need to find the cause; I am pretty sure about the
> symptoms and the cure, though. One huge transaction -> slow; many
> smaller transactions -> fast. Where should I look? Which factor have I
> missed, if it's not RAM?
> Ciao
> --
> Nando Dessena