Subject: Re: [Firebird-general] Re: Snapshot Isolation in A Critique of ANSI SQL Isolation Levels
Author: Ann W. Harrison
On 10/19/2010 3:21 PM, unordained wrote:
>>> I think some database constraints would need to be marked as "always
>>> check on commit" because there's no clear trigger to invalidate them.
>> You may, but I haven't found a case that convinces me yet.

I'm not arguing that deferred constraints aren't needed at all, just
that an efficient way to enforce most deferred constraints is to
evaluate them at runtime (when the data is likely to be in cache),
allow the transaction to continue after a constraint failure with
a "black mark" on that particular action, and re-evaluate just the
black marks on commit. If circumstances have changed (the other
transaction failed, or some third transaction made a change that
corrected the problem), the re-validation erases the black mark
and allows the transaction to commit.
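A minimal sketch of that black-mark scheme; `Transaction`, `apply`, and the `check()` predicate are illustrative names, not Firebird internals:

```python
# Illustrative sketch of the "black mark" scheme described above.
# The constraint object and its check() predicate are hypothetical
# stand-ins, not actual Firebird structures.

class Transaction:
    def __init__(self):
        # Actions whose constraint check failed at runtime.
        self.black_marks = []

    def apply(self, action, constraint):
        # Evaluate the constraint at runtime, while the data is
        # likely still in cache.
        if not constraint.check():
            # Don't abort: record a black mark and keep going.
            self.black_marks.append((action, constraint))

    def commit(self):
        # Re-evaluate only the black-marked actions.  Circumstances
        # may have changed: a conflicting transaction rolled back, or
        # a third transaction corrected the problem.
        still_failing = [(a, c) for (a, c) in self.black_marks
                         if not c.check()]
        if still_failing:
            raise RuntimeError("constraint violation at commit")
        self.black_marks.clear()
```

The point of the sketch is the cheap happy path: a change that passes its runtime check never touches the commit-time list at all.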
> So, if you wanted a rule such as "count(*) on table1 == count(*) on table2",
> would you try to analyze the expression/procedural code that defines the
> constraint, find its dependencies, and check the rule any time any of the objects
> involved in the rule are touched? Or after every statement, no matter the action?

It may be more efficient to evaluate cross-table constraints only at
commit, since any transaction that inserted or deleted records in
those tables already has a black mark.
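For a rule like the count equality above, that could look like the following sketch, where plain lists stand in for tables and a shared mark set stands in for the black marks:

```python
# Sketch: a cross-table rule such as count(table1) == count(table2),
# checked only at commit.  Any insert on either table leaves a black
# mark, so commit knows the rule needs re-checking.  Lists stand in
# for real tables; the mark name is illustrative.

def insert(table, row, marks):
    table.append(row)
    marks.add("count-rule")  # touching either table marks the rule

def commit(table1, table2, marks):
    # Re-check the rule only if some change marked it.
    if "count-rule" in marks and len(table1) != len(table2):
        raise RuntimeError("count(table1) != count(table2)")
    marks.clear()
```

No per-statement dependency analysis is needed: the mark is set unconditionally on any change to a constrained table, and the (possibly expensive) rule runs once, at commit.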
> (In this case, you would need deferred constraints anyway, just to allow a
> transaction to insert into both tables, then verify. But that's not the point --
> the point is that database rules can touch a lot of tables, can be complex
> queries, etc. and would be difficult to analyze properly ahead of time to avoid
> useless validations, and would be inefficient to revalidate constantly.)

On the other hand, many constraints are simple, and a runtime check
that succeeds avoids a costly re-evaluation of the state of the world
at commit.

> If you're looking at uncommitted data from other transactions, you have to run
> the rule twice: once assuming the other transaction will commit (your data must
> be able to stand next to upcoming data) and once assuming it will rollback (your
> data must be able to stand on its own.) I'm guessing your logic would be:
> after any change:
> if (not okay locally) then reject change;

No. If not OK locally, mark change as suspicious. Something may happen
later that makes it OK.

> else if (not okay globally) then mark change as suspicious;
> else accept change;
> on commit:
> for (every suspicious change)
> if (not okay remotely) then reject commit;
> accept commit;
> (locally == within current transaction only)
> (globally == within current + recently-committed + concurrent transactions)
> (remotely == within current + recently-committed)

With that one correction, I think that works. One asymmetry to note:
if the constraint is an upper limit, uncommitted deletes (or updates
that remove a record from the constrained group) can't be counted
toward lowering the total, since those operations could roll back and
you never want a situation where a rollback would fail.
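Putting the quoted pseudocode together with the correction, the flow might be sketched as below; `ok_locally`, `ok_globally`, and `ok_remotely` are hypothetical predicates matching the three scopes defined above:

```python
# Sketch of the corrected flow: a local failure marks the change
# suspicious rather than rejecting it outright, because something may
# happen later that makes it OK.  The three predicates are illustrative
# stand-ins for the scopes defined in the thread:
#   locally  = current transaction only
#   globally = current + recently-committed + concurrent transactions
#   remotely = current + recently-committed transactions

def on_change(change, suspicious, ok_locally, ok_globally):
    if not ok_locally(change) or not ok_globally(change):
        # Keep going; re-check this change at commit.
        suspicious.append(change)
    # Otherwise the change is accepted as-is.

def on_commit(suspicious, ok_remotely):
    # Concurrent uncommitted work is excluded here: it could still
    # roll back, and a rollback must never fail.
    for change in suspicious:
        if not ok_remotely(change):
            raise RuntimeError("commit rejected: %r" % (change,))
    # All suspicious changes re-validated; the commit proceeds.
```

Only the suspicious list is re-scanned at commit, so a transaction whose changes all passed their runtime checks pays nothing extra.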