Subject | Re: [Firebird-general] Re: Snapshot Isolation in A Critique of ANSI SQL Isolation Levels
---|---
Author | unordained
Post date | 2010-10-19T18:45:34Z
---------- Original Message -----------
From: "Ann W. Harrison" <aharrison@...>
From: "Ann W. Harrison" <aharrison@...>
> Actually no, some action must have caused the constraint to be
> invalidated. As Firebird currently behaves (or did when I last
> looked at it, which wasn't recent) you can have constraint
> violations that aren't caught because they are validated from
> the point of view of the active transaction, but that's independent
> of whether they're validated immediately or on commit. The
> correct implementation (as I told Jim 25 years ago) is to validate
> all constraints from the omniscient view of the server, not just
> uniqueness and referential integrity.
>
> Cheers,
>
> Ann
------- End of Original Message -------
... which would allow constraints such as "an invoice can have at most 25 items" to be enforced even when two transactions simultaneously insert 5 items each into an 18-item invoice, without resorting to locking the invoice record. From neither transaction's perspective is anything wrong, yet from the global point of view the constraint has been violated.
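To make that failure mode concrete, here is a minimal, self-contained Python sketch (purely illustrative; the toy Database/Transaction classes are mine, not Firebird's internals) of two transactions that each validate the 25-item limit against their own snapshot, both pass, and together push the invoice to 28 items:

```python
# Toy model of write skew on an aggregate constraint under snapshot isolation.
# Each "transaction" sees only the item count committed when it started,
# validates against that snapshot, and then commits without re-checking.

MAX_ITEMS = 25

class Database:
    def __init__(self, committed_items):
        self.committed_items = committed_items  # state visible to new transactions

class Transaction:
    def __init__(self, db):
        self.db = db
        self.snapshot_items = db.committed_items  # count seen at transaction start
        self.pending = 0                          # items inserted by this transaction

    def insert_items(self, n):
        # Constraint is checked against this transaction's view only.
        if self.snapshot_items + self.pending + n > MAX_ITEMS:
            raise ValueError("constraint violated: more than 25 items")
        self.pending += n

    def commit(self):
        # No re-validation here -- this is the gap being discussed.
        self.db.committed_items += self.pending

db = Database(committed_items=18)

t1 = Transaction(db)   # both transactions start from the same 18-item state
t2 = Transaction(db)

t1.insert_items(5)     # sees 18 + 5 = 23 <= 25, passes
t2.insert_items(5)     # also sees 18 + 5 = 23 <= 25, passes

t1.commit()
t2.commit()

print(db.committed_items)  # 28 -- the invoice now silently violates the limit
```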
If you validate constraints at commit (prepare) time, taking into account everything that has happened in the current transaction plus all changes from other transactions that have since committed (or asked to commit), you could find unexpected violations: at the time you inserted those new line items the total was still under 25, but by the time you commit, another transaction has also inserted and committed ahead of you, and you're no longer allowed to commit your new items. But if you don't check again, because the earlier checks (at each item insertion) all passed, you'd miss something important.

I think some database constraints would need to be marked as "always check on commit" because there's no clear trigger to invalidate them. (Or add an SP-callable function that lets real triggers mark constraints as invalidated in the current transaction and in all other running transactions, and trust the programmer implementing the constraint to know both how to check it and what could cause it to be invalidated.)
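As a rough sketch of that commit-time re-check (again illustrative Python of my own, not how Firebird or any particular engine implements it), the same toy model can re-validate the constraint during commit against the globally committed state plus the committing transaction's own pending inserts:

```python
# Same toy model as above, but the constraint is re-validated at commit
# ("prepare") time against the committed state plus this transaction's
# pending inserts, instead of only against its original snapshot.

MAX_ITEMS = 25

class Database:
    def __init__(self, committed_items):
        self.committed_items = committed_items

class Transaction:
    def __init__(self, db):
        self.db = db
        self.snapshot_items = db.committed_items
        self.pending = 0

    def insert_items(self, n):
        # Early check against the snapshot, as before (cheap, but not final).
        if self.snapshot_items + self.pending + n > MAX_ITEMS:
            raise ValueError("constraint violated at insert time")
        self.pending += n

    def commit(self):
        # Commit-time check uses the current committed total, not the snapshot.
        if self.db.committed_items + self.pending > MAX_ITEMS:
            raise RuntimeError("constraint violated at commit time; rolling back")
        self.db.committed_items += self.pending

db = Database(committed_items=18)
t1, t2 = Transaction(db), Transaction(db)

t1.insert_items(5)   # passes: 18 + 5 <= 25
t2.insert_items(5)   # passes: 18 + 5 <= 25

t1.commit()          # passes: 18 + 5 <= 25, committed total becomes 23
try:
    t2.commit()      # fails: 23 + 5 > 25 -- the "unexpected violation" above
except RuntimeError as e:
    print(e)

print(db.committed_items)  # still 23 -- the limit holds
```

The second committer is the one that hits the unexpected violation; a real engine would also have to account for transactions that have asked to commit but not yet finished (the prepare case mentioned above), which this sketch ignores.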
-Philip