| Subject | Re: [firebird-support] Isolation level in CHECKs and triggers |
|---|---|
| Author | Ann Harrison |
| Post date | 2003-11-25T20:08:12Z |
Hello all,
Firebird's native transaction mode, formerly called "concurrency"
and now called "snapshot", provides repeatable reads but not full
serializability. For those of you who've been out of school for a
couple of years, serializability is the gold standard of isolation
levels. When a database provides serializable transactions, changes to
the database are always applied as if each transaction ran in
isolation. The order in which they seem to have run is arbitrary, but
you never have two transactions that are each affected by the other's
actions.
Here is a case where snapshot transactions produce a
non-serializable result:
Some transaction: create table foo (f1 integer);
Transaction 1: insert into foo select count (*) from foo;
Transaction 2: insert into foo select count (*) from foo;
Transaction 3: insert into foo select count (*) from foo;
Transaction 4: insert into foo select count (*) from foo;
All transactions: commit;
A serializable system would block Transaction 2 until Transaction 1
finished, then block Transaction 3 until Transaction 2 finished, etc. At
the end, foo would contain the numbers 0, 1, 2, 3 in that order. Firebird
snapshot transactions do not block, and at the end of this sequence foo
contains 0, 0, 0, 0. A lock-based concurrency scheme that relied on read
and write locks would get the same result. To get a serialized result
using locks, you need to lock a mystic "end of table" marker, which
literally serializes the transactions. All inserts are serialized
because the system has no way of knowing which fields, if any, should be
duplicate-free. The effect on performance is about what you'd expect.
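The example above can be sketched as a toy model in Python. This is a simulation of snapshot semantics, not Firebird itself: each "transaction" reads count(*) against its own frozen copy of foo, taken before any of the others commit, so every one of them counts zero rows.

```python
# Toy model of snapshot isolation for the example above (a simulation,
# not Firebird): four overlapping transactions each read count(*) from
# their own snapshot of foo, then insert that count.

foo = []  # committed state of table foo

# Each transaction takes its snapshot before any of the others commits,
# so no snapshot contains the others' inserts.
snapshots = [list(foo) for _ in range(4)]

# Each transaction evaluates "select count (*) from foo" against its
# snapshot and buffers the row it will insert.
pending = [len(snap) for snap in snapshots]

# All transactions commit; their buffered inserts reach the table.
for value in pending:
    foo.append(value)

print(foo)  # -> [0, 0, 0, 0], not the serializable [0, 1, 2, 3]
```

Rerunning the model with the transactions forced to run one after another (snapshot taken after the previous commit) would give 0, 1, 2, 3 instead.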
A Firebird snapshot transaction does not see results of concurrent
transactions. A trigger that checks for duplicates won't work because
the trigger sees exactly the same thing as the transaction it runs in.
The duplicates will be invisible to the trigger and the inserts will
succeed. However, a unique constraint does eliminate duplicates.
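To illustrate that a unique constraint, unlike a trigger-based check, does reject the duplicate, here is a small runnable example. It uses SQLite via Python's sqlite3 module as a stand-in for Firebird (the constraint name foo_f1_uq is made up for the example); within a single connection the constraint behavior is analogous.

```python
import sqlite3

# Illustration using SQLite (not Firebird): a unique constraint rejects
# the duplicate that a trigger-based check would have missed.
conn = sqlite3.connect(":memory:")
conn.execute(
    "create table foo (f1 integer, constraint foo_f1_uq unique (f1))"
)

conn.execute("insert into foo values (0)")
try:
    conn.execute("insert into foo values (0)")  # duplicate row
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)  # unique-constraint violation

# Only the first insert survived.
print(conn.execute("select count (*) from foo").fetchall())
```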
Triggers and stored procedures run in the "transaction context" of the
process that invoked them. Constraints, however, run in the context of
a system transaction that can "see" all records, committed or not. In
the case above, a unique constraint will cause Transaction 2 to stall
until Transaction 1 completes, then get a unique key violation error if
Transaction 1 commits. If Transaction 2 is in snapshot mode, it must
commit or roll back and start a new transaction before trying the
operation again, since its snapshot will never show Transaction 1's row.
If Transaction 2 is read-committed, it can simply retry the operation,
since it will see Transaction 1's committed changes. (Not that I'm
advocating read-committed, it just happens to work in this one odd-ball
case.)
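The retry pattern described above can be sketched in generic DB-API style. Again SQLite stands in for Firebird, and the scenario is staged: row 0 is already committed by "Transaction 1", while our transaction computed its count from a stale snapshot and so first tries to insert a duplicate.

```python
import sqlite3

# Sketch of the retry pattern (SQLite as a stand-in for Firebird):
# on a unique violation, roll back to end the failed transaction, then
# re-read in a fresh transaction that sees the committed state.
conn = sqlite3.connect(":memory:")
conn.execute("create table foo (f1 integer unique)")
conn.execute("insert into foo values (0)")  # "Transaction 1", committed
conn.commit()

stale_count = 0  # count (*) taken from a snapshot that missed row 0
for attempt in range(5):
    try:
        conn.execute("insert into foo values (?)", (stale_count,))
        conn.commit()
        break
    except sqlite3.IntegrityError:
        conn.rollback()  # end the failed transaction before retrying
        # The fresh read sees Transaction 1's committed row.
        (stale_count,) = conn.execute(
            "select count (*) from foo"
        ).fetchone()

print(conn.execute("select f1 from foo order by f1").fetchall())
```

The first attempt inserts the stale value 0, hits the unique violation, rolls back, re-reads a count of 1, and succeeds on the second attempt.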
Table constraints, field constraints, unique constraints (and unique
indexes), and referential integrity constraints all run in the system
transaction context.
Hope this helps,
Ann