Subject | Re: [Firebird-Architect] Cloud database
---|---
Author | Jim Starkey
Post date | 2008-07-24T14:50:39Z
unordained wrote:
> ---------- Original Message -----------
> From: Jim Starkey <jstarkey@...>
>
>> If a transaction executes on a single node, there is no issue. But even
>> relaxing this, it boils down to the same principle that governs Firebird
>> transactions, which is that any given transaction sees a stable set of
>> committed transactions. Records that were committed by a transaction
>> that was visibly committed when another transaction starts are visible,
>> otherwise a back version (or nothing) is required.
>>
>> --
>> Jim Starkey
>> President, NimbusDB, Inc.
>> 978 526-1376
>>
>
> A worker node therefore can't do anything with a (new) transaction until it has access to data
> committed by all transactions before that one (implying that each worker node has access to an
> accurate list of transactions committing on other nodes.) That would allow nodes to keep
> working on existing transactions for which they already have all dependent data, even if they
> haven't gotten any replication messages -- except when it comes time to commit. For a transaction
> to commit, all changes must:
>
> a) be compatible with all data committed or prepared previously (whoever tries to prepare
> first "wins" in any conflicts, they've already been told their data is good-to-go)
> b) be compatible with all data committed previously (if a transaction was prepared, went into
> limbo, and then got rolled back after you prepared, your changes need to work either with or
> without that transaction's changes, otherwise the limbo transaction would be unable to roll back
> because its rollback would now itself cause constraint violations, which is a bad thing.)
>
That doesn't follow. A transaction only has to see data from committed
transactions that were visible when it started. A transaction that has
committed on another node but whose notification hasn't arrived is no
different from a transaction that committed after our transaction
started. Same logic, same rule.
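
To make the rule concrete, here's a minimal sketch in Python (the names
and shapes are illustrative assumptions, not actual engine code): a
transaction snapshots the set of visibly committed transactions at
start and judges every record version against that snapshot, so a late
notification and a late commit look identical.

```python
# Minimal sketch of snapshot visibility (illustrative, not NimbusDB code).

class Transaction:
    def __init__(self, txn_id, committed_at_start):
        self.txn_id = txn_id
        # Transactions visibly committed when we started; fixed for life.
        self.snapshot = frozenset(committed_at_start)

    def can_see(self, version_txn_id):
        # Our own writes, plus anything in the start-time snapshot.
        return version_txn_id == self.txn_id or version_txn_id in self.snapshot

    def read(self, version_chain):
        # Walk record versions newest-to-oldest; return the first visible one.
        for txn_id, value in version_chain:
            if self.can_see(txn_id):
                return value
        return None  # no visible version: the record doesn't exist for us

# A commit on another node whose notification hasn't arrived is simply
# absent from the snapshot -- treated exactly like a commit that happened
# after we started, so we read the back version instead.
```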
> Currently, you're achieving this through immediate constraint checking, but that means each node
> does need immediate access to non-committed data from other nodes, with all the locking involved.
> Replication delays would be painful. Every insert would need to know there was no PK/Unique/FK
> violation, place read locks on the referred-to records (FK), etc.; at the very least, you would
> need to know that you've received replicated data for all "relevant" changes -- all changes (even
> uncommitted) to the table being affected, or any tables it depends on. Since the replication itself
> tells you what changes are out there, how would you know?
>
As I said, there are operations that must be effectively serialized --
unique indexes, foreign key relationships, and multiple updates to a
record. There are a variety of ways to handle these, and unfortunately,
some of the more efficient run afoul of the SQL rules for error
notification. They still need to be handled, however.
One strategy that I'm partial to is a "unique value" type, generally
the result of a sequence (generator) used to generate primary keys. If
the system knows that the value came from a sequence, it doesn't need to
do the unique check. It does presume, however, that the programmer
didn't lie about the unique value.
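
As a sketch of how that could look (hypothetical names; striding
per-node sequences by the node count is just one assumed way to keep
them disjoint):

```python
import itertools

class UniqueValue(int):
    """An int tagged as sequence-generated, hence trusted to be unique."""
    pass

class Sequence:
    # One way to keep nodes from colliding: node k hands out
    # k, k + stride, k + 2*stride, ... with stride >= number of nodes.
    def __init__(self, node_id, stride):
        self._next = itertools.count(node_id, stride)

    def next_value(self):
        return UniqueValue(next(self._next))

def needs_unique_check(key):
    # Sequence-generated keys skip the distributed uniqueness check;
    # everything else pays for coordination with the other nodes.
    return not isinstance(key, UniqueValue)
```

An insert whose key fails `needs_unique_check` can proceed without
talking to any other node; the scheme stands or falls on the programmer
not smuggling an ordinary value in as a `UniqueValue`.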
> Deferred constraints could help with that, but they have their own issues -- you have to know what
> needs to be re-checked at commit (prepare) time, and then you need to have access to all that info
> (see above) to make a decision. And you still want to serialize the actual commits. A commit
> (prepare) would have to prevent other nodes from committing (implying some sort of centralized
> transaction manager -- a separate TM known by all nodes, not just the client and the servers it's
> connected directly to), wait for all required replication data to arrive (all committed and
> prepared transactions up until this one), check the transaction's changes against the others (PK
> violations, etc.), then prepare, have the TM release the lock on commits, and then go about your
> merry way (and start sending out replication data) ...
>
handling the errors when the notifications come in. That doesn't quite
give the timely error notification that SQL requires, but it is much
more efficient.
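
A rough sketch of that broadcast-and-reconcile shape (again
illustrative Python under assumed names; the deterministic tie-break is
my own placeholder, not a stated design): the insert succeeds locally
and is broadcast without waiting, and a conflicting notification
arriving later dooms one side's transaction.

```python
class OptimisticUniqueIndex:
    def __init__(self, broadcast):
        self.owner = {}          # key -> txn_id that claimed it
        self.broadcast = broadcast
        self.doomed = set()      # transactions that must roll back

    def insert(self, txn_id, key):
        # Claim the key locally and tell everyone; no round trip.
        self.owner[key] = txn_id
        self.broadcast(txn_id, key)

    def on_remote_insert(self, remote_txn_id, key):
        local_txn_id = self.owner.get(key)
        if local_txn_id is not None and local_txn_id != remote_txn_id:
            # Duplicate key discovered after the fact. Every node applies
            # the same deterministic rule, so they all doom the same side.
            self.doomed.add(max(local_txn_id, remote_txn_id))
        else:
            self.owner[key] = remote_txn_id
```

The error surfaces when the notification arrives rather than at the
INSERT statement itself, which is exactly the tension with SQL's timing
rules noted above.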
> Or maybe I just need more caffeine. That happens.

--
Jim Starkey
President, NimbusDB, Inc.
978 526-1376