Subject: Re: [firebird-support] savepoints limit
Author: Ann W. Harrison
At 12:08 PM 12/22/2004, kaczy27 wrote:


>I read through the doc about savepoints and the only clue I find is the
>limit around 10^4 records affected.
>I don't think that I hit that amount within a single transaction.
>How many savepoints can be set without a commit? (How many levels deep
>can the transaction be?)

Savepoints are an extension of the transaction auto-undo mechanism,
which in turn is an extension of the mechanism that makes triggers,
mass updates (i.e. a searched update or delete), and procedures atomic.
Essentially, starting a transaction, procedure, trigger, or mass update
creates a bitmap of the identifiers of records changed in the operation
and a linked list of their old values.
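
From the user's side, that mechanism shows up as the SAVEPOINT
statements Firebird provides. A minimal sketch, with made-up table and
savepoint names:

    -- inside an active transaction
    INSERT INTO accounts (id, balance) VALUES (1, 100);

    SAVEPOINT sp_adjust;                      -- user-declared savepoint
    UPDATE accounts SET balance = 50 WHERE id = 1;
    ROLLBACK TO SAVEPOINT sp_adjust;          -- undoes only the UPDATE
    RELEASE SAVEPOINT sp_adjust;              -- discards the savepoint,
                                              -- keeps the remaining work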

The savepoint lists for triggers, procedures, and mass updates can be
of any length. The auto-undo log for a transaction is abandoned if
it gets over 500 records. In that case, the transaction will not
undo its work if it fails, but will rely on the garbage collection
system to remove its changes. There is, as far as I can see, no
limit on the size of a declared savepoint.

The savepoint lists hang off the transaction block in a linked
list, so there's no obvious limit to the number that can exist
simultaneously. Each savepoint list has a number, which is a signed
32-bit long - that should allow a transaction to create quite a
few savepoints before anything overflows.
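
So something along these lines (hypothetical names) should work many
levels deep - the numeric limit on savepoint numbers isn't something
you'd hit in practice:

    SAVEPOINT level_1;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    SAVEPOINT level_2;
    UPDATE accounts SET balance = balance - 10 WHERE id = 2;
    SAVEPOINT level_3;
    UPDATE accounts SET balance = balance - 10 WHERE id = 3;
    -- undo only the innermost work; level_1 and level_2 still stand
    ROLLBACK TO SAVEPOINT level_3;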


>Can I safely read from records I inserted in the current transaction
>but some levels before, possibly after a few partial commits (savepoint
>releases)?

Yes.
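
For example (made-up names again), this reads back its own insert
without trouble, even after the savepoint has been released:

    INSERT INTO orders (id, status) VALUES (42, 'NEW');
    SAVEPOINT after_insert;
    -- ... other work under later savepoints ...
    RELEASE SAVEPOINT after_insert;
    -- the row is still visible here: a transaction always sees its
    -- own changes, released savepoints or not
    SELECT status FROM orders WHERE id = 42;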

>I know I should not update the same record multiple times
>within the same transaction. I am not doing this, but what could be
>the consequences? Is it defined as forbidden (if so, why is there no
>exception from Firebird) or does it only bail out under some
>circumstances (and if so, what are those)?

You can modify a single record multiple times in a transaction -
that shouldn't cause any problems except possibly having a lot
of back versions of the record. The number of back versions
kept depends a lot on how the changes are made - directly or
through nested stored procedures or through triggers. Each
procedure and trigger needs to be able to revert to the state
before it started, while a direct update need only be able to
revert to the state before the transaction started. Updating the
same record many times is not terribly good practice - one
particular application interface often made hundreds of
modifications to a single record in a single transaction; it
worked, but slowly.
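
In other words, a sequence like this one (sketch, made-up names) runs
fine in a single transaction; the cost is just the back-version and
undo bookkeeping the engine keeps for each change:

    UPDATE accounts SET balance = balance + 10 WHERE id = 1;
    UPDATE accounts SET balance = balance - 3  WHERE id = 1;
    UPDATE accounts SET balance = balance * 2  WHERE id = 1;
    -- all three succeed; the engine keeps enough old-value information
    -- to revert to the state before the transaction started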

If you're losing inserts, which I gather you think you are,
we should figure out what's happening. Even if you exceed
some limit, you should get an error, not lost changes.

Regards,


Ann