Subject Re: Serializing concurrent updates
Author partsi
Ann, thanks for the info! The short answer, i.e. detecting a duplicate key error, rolling back, and restarting the transaction, is an easy way to go. What is the long answer? :-)
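
For the archives, here is roughly how I picture that retry pattern on the client side. Just a sketch, assuming a Python client with the fdb driver; MYTABLE, CURRENT_VAL and the connection handling are placeholders of mine, not anything from your answer:

import fdb

def next_value(con, max_retries=5):
    # Increment MYTABLE.CURRENT_VAL and return the new value,
    # retrying when a concurrent transaction got there first.
    for attempt in range(max_retries):
        try:
            cur = con.cursor()
            cur.execute("SELECT CURRENT_VAL FROM MYTABLE")
            (current,) = cur.fetchone()
            cur.execute("UPDATE MYTABLE SET CURRENT_VAL = ?",
                        (current + 1,))
            con.commit()
            return current + 1
        except fdb.DatabaseError:
            # duplicate key or update conflict: roll back and
            # restart the whole transaction, as you suggested
            con.rollback()
    raise RuntimeError("gave up after %d attempts" % max_retries)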

I am wondering whether there is a better-performing solution. Is it really the case that a writer cannot increment a value by one and get the incremented value without running under the "sophisticated transaction control" of Firebird? Is there a way to avoid using transactions? For writers, the behavior of a generator is exactly what I am looking for: just get the next unique value, but also store it in a table. The reason I want the generator value stored in a table is that it is then under transaction control for readers. Consider the following sequence of operations, in this order (T stands for transaction); a code sketch of the same interleaving follows below:

no transactions: V1 is the current value in MyTable
T1: starts
T2: starts
T2: generates a next unique value, V2, updates the value in MyTable
T2: commits (new transactions see V2 after this, which is fine)
T1: reads value from MyTable, gets V1 (getting V2 would be incorrect)
T1: commits

If I used a generator alone, T1 would get V2 when it reads the generator's current value, because generator values are not under transaction control, and that is wrong in my case. I hope you understand what I am trying to say.
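
To make the difference concrete, here is the same interleaving in code. Again only a sketch with placeholder names, and it assumes both connections run in snapshot (concurrency) isolation, Firebird's default transaction mode; whether your driver starts transactions in that mode by default is another matter:

import fdb

DSN = 'localhost:/data/test.fdb'   # placeholder database
t1 = fdb.connect(dsn=DSN, user='sysdba', password='masterkey')
t2 = fdb.connect(dsn=DSN, user='sysdba', password='masterkey')

c1 = t1.cursor()
c1.execute("SELECT CURRENT_VAL FROM MYTABLE")    # T1 starts, snapshot taken
print(c1.fetchone()[0])                          # V1

c2 = t2.cursor()                                 # T2 starts
c2.execute("UPDATE MYTABLE SET CURRENT_VAL = GEN_ID(MY_GEN, 1)")
t2.commit()                                      # T2 stores V2; new transactions see it

c1.execute("SELECT CURRENT_VAL FROM MYTABLE")
print(c1.fetchone()[0])                          # still V1 inside T1's snapshot

c1.execute("SELECT GEN_ID(MY_GEN, 0) FROM RDB$DATABASE")
print(c1.fetchone()[0])                          # V2: the generator ignores snapshots

t1.commit()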

As for Firebird transactions, the "no wait" option also allows a transaction to fail immediately instead of waiting for a concurrent update to complete. So waiting is not strictly necessary, but it is usually the best bet if the transaction is going to be rerun anyway.
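
In code, "no wait" is a lock resolution flag in the transaction parameter block. With the fdb driver I believe it looks something like the sketch below; the TPB class and the isc_tpb_nowait constant are my reading of the driver docs, so treat them as an assumption:

import fdb

con = fdb.connect(dsn='localhost:/data/test.fdb',
                  user='sysdba', password='masterkey')

tpb = fdb.TPB()
tpb.lock_resolution = fdb.isc_tpb_nowait   # fail at once on a lock conflict
con.begin(tpb)

cur = con.cursor()
try:
    cur.execute("UPDATE MYTABLE SET CURRENT_VAL = CURRENT_VAL + 1")
    con.commit()
except fdb.DatabaseError:
    con.rollback()   # a concurrent updater holds the row: rerun or give up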

Timo