Subject: Re: update, foreign keys, locking
Author: Alexander V. Nevsky
--- In, "unordained"
<unordained_00@c...> wrote:
> I believe in Sugi's question, the intention was that the second
> transaction would attempt to update the record before the first one
> had committed -- with the intent that somehow the deadlock could be
> ignored (not wait for commit/rollback, not time out, but just wait
> until commit, then commit changes despite the record having been
> updated during the lifetime of the second transaction.) I think
> that's a logical problem ... and have no idea why you'd want to do
> that.

Philip, let's assume you have $100 in your account. One operator
wants to register that you paid $10 by check for beer; another, that
you received a $50 check for consulting on programming. Is there any
difference in result between the sequences $100-$10+$50 and
$100+$50-$10? The check for a positive remainder should be placed in a
trigger or on the column directly. The transactions are read_committed
and they register increments and decrements, not new absolute values.
So we can "overwrite" each other's changes, because we don't really
overwrite - we make sequential increments. In nowait mode we'll get a
lock conflict (a deadlock is another beast; IB error messages
incorrectly mix these ideas) if the second transaction tries to write
before the first has committed, but not once it has committed. The
result will be correct. If the second transaction runs at the
concurrency isolation level, it will always get a conflict: at once in
nowait mode, and when the first transaction commits its changes in
wait mode. So concurrency is ideal for absolute (non-incremental)
changes and lets us avoid pessimistic locking, resolving conflicts
only when they actually occur. The cost is restarting the transaction
on conflict, re-fetching the data, checking the differences by eye and
deciding what to do with our own changes. If conflicts are rare
enough, this is more optimal in the sense of server load. But if
operators edit large texts it is, as you mentioned, very
disappointing to re-type changes, and that area is for pessimistic
locking.
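The arithmetic above can be sketched in plain Python (this is an in-memory illustration of why increments commute while absolute writes lose updates - it is not Firebird API; the function names are invented for the example):

```python
# Illustration only: why increment-style updates commute, while
# absolute overwrites suffer lost updates. No database involved.

def apply_increments(balance, increments):
    """Each operator records a delta; the order does not matter."""
    for delta in increments:
        balance += delta
    return balance

def apply_absolute(balance, updates):
    """Each operator read the same old balance and wrote back an
    absolute value; the later write silently discards the earlier."""
    for new_value in updates:
        balance = new_value
    return balance

start = 100
# Operator 1: -$10 (beer check); operator 2: +$50 (consulting check).
print(apply_increments(start, [-10, +50]))  # 140
print(apply_increments(start, [+50, -10]))  # 140, same result

# Both operators read $100 and wrote back new absolute totals:
print(apply_absolute(start, [100 - 10, 100 + 50]))  # 150, the -$10 is lost
```

This is exactly why the positive-remainder check belongs in a trigger or on the column: the increments themselves are order-independent, but the constraint must still hold for every intermediate state.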

> However, all of this still assumes that we don't want to save
> changes regardless of any other changes made by other transactions
> during our transaction's

"It depends", what I'm trying to explain.

> Sugi's question is still open. I don't believe that using an update
> statement with a += operator is going to change FB's opinion about
> what you're attempting to do -- as the first transaction hasn't
> committed yet, even read-committed will still see the old value of
> the field, and attempt to set it to a new value based on the old
> one ...

In read_committed nowait mode we'll get a lock conflict and, if the
nature of the data allows incremental updates, we can simply repeat
the attempt without restarting the transaction; it will succeed sooner
or later, regardless of whether our concurrent commits or rolls back
his changes. After a few attempts we can inform the user, and he can
choose to try again now or do it later. In wait mode it will always
succeed, though perhaps delayed. But in wait mode it's easy to get
into a real deadlock: transaction A has modified record 1 and waits
for transaction B to commit its changes to record 2, while transaction
B has modified record 2 and waits for transaction A to commit its
changes to record 1. Or we wait for a colleague who started editing
and went to have dinner without committing - the same dangers as in
pessimistic locking, but without creating additional garbage record
versions from dummy updates.
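The retry pattern described above can be sketched like this (a plain-Python simulation, assuming nothing about any real driver: `LockConflictError`, the `execute` callable, and the attempt limit are all invented for illustration):

```python
# Sketch of the read_committed + nowait retry pattern: on a lock
# conflict, repeat the incremental UPDATE without restarting the
# transaction, and give up after a few attempts so the user can
# decide whether to try again later. Simulation only, not a real API.

import time

class LockConflictError(Exception):
    """Raised when the row is still locked by an uncommitted writer."""

def update_with_retry(execute, attempts=5, delay=0.1):
    """Retry an incremental update until the concurrent transaction
    commits or rolls back; return False if all attempts conflicted."""
    for _ in range(attempts):
        try:
            execute()  # e.g. UPDATE accounts SET balance = balance - 10
            return True
        except LockConflictError:
            time.sleep(delay)  # a conflict, not a deadlock: just retry
    return False  # inform the user; he can choose to try again later

# Simulated usage: the first two attempts hit a lock, the third succeeds.
calls = {"n": 0}
def fake_execute():
    calls["n"] += 1
    if calls["n"] < 3:
        raise LockConflictError()

print(update_with_retry(fake_execute, delay=0))  # True
```

The key point is that the loop never restarts the transaction itself: because the update is an increment rather than an absolute value, repeating the statement is safe once the concurrent writer has finished.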

> So, did we just give up on ramdisk stuff again? Is the concensus
> "we don't know, but it sounds dangerous"?

We know. Just read the API Guide attentively and make some simple
experiments to clear up the obscure points for yourself.

Best regards, Alexander.