| Subject | Re: [firebird-support] Serializing concurrent updates |
| --- | --- |
| Author | Ann W. Harrison |
| Post date | 2010-02-18T19:46:58Z |
partsi wrote:
> I have a question regarding serialized updates. We have a table as follows:
>
> CREATE TABLE MyTable ( ID INT NOT NULL, Data INT NOT NULL )
> INSERT INTO MyTable VALUES ( 1, 1 )
>
> There are two types of transactions for the MyTable table. Writer transactions and reader transactions. The writer transactions are very short and they only update values in the table as follows:
>
> UPDATE MyTable
> SET Data = Data + 1
> WHERE ID = 1
> RETURNING Data;
>
> I.e. each writer increments a value by one and determines the incremented value.
>
> The reader transactions read values from the MyTable table. They use optimistic concurrency by using the REPEATABLEREAD transaction isolation level.
>
> Application requirements:
>
> R1) Writers cannot block readers (and readers cannot block readers)
> R2) Each writer must update and get a unique value
>
> How to fulfil both of these requirements? By definition, readers utilizing row versioning are not blocked by other readers and writers. So, I think R1 is already fulfilled. What about R2? Can we issue updates in a way that in case of N concurrent writers, each updates and gets a unique value, and the value in the table is incremented by N? Does my update already guarantee this if run under READCOMMITTED? And if it doesn't, how to change it and what isolation level to use?
>
The short answer is: put a unique index on the field. When you get a
duplicate key error, roll back your transaction and restart it.
Somewhere under the covers Firebird has a way of locking a table that
allows one writer and n readers (called Protected access) but it's
not surfaced in the SQL interface. Besides, it has almost exactly
the same effect as trying, failing, rolling back, and trying again.
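
To make that try / fail / retry pattern concrete, here is a minimal
sketch - not part of the original message - built around the poster's
UPDATE ... RETURNING statement. It assumes the fdb Python driver and
placeholder connection settings; any DB-API driver would look much
the same.

# Retry sketch (illustrative only): roll back and start a new
# transaction whenever the statement fails with a duplicate-key or
# update-conflict error. DSN and credentials are placeholders.
import fdb

def next_counter_value(dsn='localhost:/tmp/test.fdb',
                       user='sysdba', password='masterkey'):
    con = fdb.connect(dsn=dsn, user=user, password=password)
    try:
        while True:
            try:
                cur = con.cursor()
                cur.execute("UPDATE MyTable SET Data = Data + 1 "
                            "WHERE ID = 1 RETURNING Data")
                value = cur.fetchone()[0]   # value produced by RETURNING
                con.commit()                # this writer now owns the value
                return value
            except fdb.DatabaseError:
                # Another writer got there first: discard our attempt and
                # try again in a fresh transaction, as described above.
                con.rollback()
    finally:
        con.close()
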
As a further elaboration, despite what you may have heard, repeatable
read, as implemented by Firebird, is not "optimistic concurrency
control". Under optimistic concurrency control, a transaction runs
without checking for conflicts, but keeping a list of changes it
has made - and in some implementations also records it has read.
On commit, the database system compares that transactions changes
with changes made by concurrent transactions and decides whether
it succeeded or failed. Optimistic concurrency determines the
outcome at the end of the transaction.
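
As a purely illustrative aside, not taken from the original post, the
commit-time validation described above can be sketched in a few lines
of Python; every name here is invented for the toy example.

# Toy optimistic concurrency control: writes are buffered, and conflicts
# are only checked when the transaction tries to commit.
class ConflictError(Exception):
    pass

class ToyDatabase:
    def __init__(self):
        self.data = {}              # committed key -> value
        self.last_commit_tick = {}  # key -> logical time of last commit
        self.tick = 0               # logical clock

    def begin(self):
        return OptimisticTxn(self)

class OptimisticTxn:
    def __init__(self, db):
        self.db = db
        self.start_tick = db.tick   # logical time when we started
        self.read_set = set()       # records we have read
        self.write_set = {}         # buffered changes, not yet applied

    def read(self, key):
        self.read_set.add(key)
        return self.write_set.get(key, self.db.data.get(key))

    def write(self, key, value):
        self.write_set[key] = value  # no conflict check at this point

    def commit(self):
        # Validation happens only now: fail if anything we read or wrote
        # was changed by a transaction that committed after we started.
        for key in self.read_set | set(self.write_set):
            if self.db.last_commit_tick.get(key, -1) > self.start_tick:
                raise ConflictError("conflict on %r; restart the transaction" % key)
        self.db.tick += 1
        for key, value in self.write_set.items():
            self.db.data[key] = value
            self.db.last_commit_tick[key] = self.db.tick

db = ToyDatabase()
t1, t2 = db.begin(), db.begin()
t1.write("counter", (t1.read("counter") or 0) + 1)
t2.write("counter", (t2.read("counter") or 0) + 1)
t1.commit()                  # succeeds
try:
    t2.commit()              # the outcome is decided only here, at commit
except ConflictError:
    print("t2 lost the race and must restart")
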
When a Firebird transaction - repeatable read or read committed -
finds a conflict, it stops and waits for the conflicting
transaction to complete. The conflict can be an update conflict,
meaning that a transaction has attempted to change a record whose
most recent version it cannot read, or it can be a unique constraint
conflict. In either case, the transaction that sees the conflict
waits until the previous update transaction finishes. If that other
transaction commits, the waiter gets an error and must roll back
and retry in a new transaction. Pessimistic concurrency control
assumes that if there's the appearance of a problem during an
operation, it's a problem that needs to be investigated.
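
For contrast, here is the same toy reworked in the pessimistic style
just described - again invented purely for illustration, not Firebird's
actual implementation - where the conflict is raised at the operation
itself rather than at commit.

# Toy pessimistic concurrency control: an uncommitted change to a record
# is detected by the next writer immediately, at the moment it writes.
class UpdateConflict(Exception):
    pass

class ToyPessimisticDb:
    def __init__(self):
        self.data = {}
        self.row_locks = {}   # key -> transaction holding an uncommitted change

class PessimisticTxn:
    def __init__(self, db):
        self.db = db
        self.writes = {}

    def write(self, key, value):
        holder = self.db.row_locks.get(key)
        if holder is not None and holder is not self:
            # The problem is investigated here, during the operation. A
            # real engine in wait mode would block until the holder
            # finished, then fail if it committed or proceed if it
            # rolled back.
            raise UpdateConflict("record is locked by another transaction")
        self.db.row_locks[key] = self
        self.writes[key] = value

    def commit(self):
        for key, value in self.writes.items():
            self.db.data[key] = value
            del self.db.row_locks[key]

    def rollback(self):
        for key in self.writes:
            del self.db.row_locks[key]
        self.writes.clear()

db = ToyPessimisticDb()
t1, t2 = PessimisticTxn(db), PessimisticTxn(db)
t1.write("counter", 2)
try:
    t2.write("counter", 2)   # conflict reported right here, not at commit
except UpdateConflict:
    print("second writer must wait for the first, then retry")
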
Good luck,
Ann