| Subject | Re: [ib-support] losing data |
|---|---|
| Author | Helen Borrie |
| Post date | 2003-04-22T11:46:23Z |
At 09:19 PM 22/04/2003 +1000, Geoff Rees wrote:
>Hi,
> [using IB6.0x]. We have the situation that often the same record is
> selected by 2 computers at approx the same time. One computer is used to
> update 2 fields, the other computer another 2 different fields. If the
> first computer saves (commits) the record while the other computer is
> still active on the same record, then the data that was entered via the
> first computer will be lost when the second computer commits. (hope
> that makes sense).
>What is the solution? Can you lock certain fields?
What you are observing is IB's optimistic row-level locking at work. A row
doesn't become a "candidate" for updating until the requested update is
posted to the server. Once that request is posted, any further request to
update the same row will be denied until the original request is resolved
by either committing the changes or by rolling it back. "First come, first
served" applies to the actual DML request, regardless of which transaction
reads the row first.
If the original DML request is committed, that becomes the new version of
the row. If the other request was in snapshot ("concurrency") isolation,
its request cannot proceed. If this other request was in read-committed
isolation, it can wait, resubmit the same request, and it will succeed.
If the original DML request is rolled back, then the other request can wait
and retry, and it will succeed, because the latest committed version of the
row is the same as the "delta" that is being held for this transaction.
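As a rough illustration, here is the behaviour in two isql sessions. The
table atable, its key pkey and the columns colA/colB are placeholders, and
the exact wording of the conflict error varies between versions:

  /* Session 1: snapshot ("concurrency") isolation
     (assuming no transaction is already open in the session) */
  SET TRANSACTION WAIT ISOLATION LEVEL SNAPSHOT;
  UPDATE atable SET colA = 'first' WHERE pkey = 1;  /* posted, not committed */

  /* Session 2: also snapshot, started before session 1 finishes */
  SET TRANSACTION WAIT ISOLATION LEVEL SNAPSHOT;
  UPDATE atable SET colB = 'second' WHERE pkey = 1;
  /* -> waits on session 1.  If session 1 commits, this update is refused
     with an update-conflict error; if session 1 rolls back, it goes ahead.
     Had session 2 used READ COMMITTED instead, it could simply retry the
     same UPDATE after session 1 commits, and the retry would succeed. */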
And, no, you can never have a situation where two transactions update
different columns of the same record version. This would require "dirty
read", i.e. nil isolation, which IB does not support.
If you have an environment where a high level of concurrent access to the
same rows is required, you can employ a strategy to defeat optimistic
locking, commonly referred to as "a pessimistic row lock". To do this, you
submit a "dummy update" request, such as
  update atable
  set pkey = pkey       /* i.e. set the primary key to its own value */
  where pkey = :pkey    /* :pkey = the key of the row you want to hold */
This will fail if any other transaction is holding the same row in an
uncommitted update, thus avoiding the situation where a user's edits on a
row would be wasted; and it will succeed if there is no pending
uncommitted work on the row in another transaction. If it succeeds, the
user then has exclusive access to the row until she either commits or
rolls back her transaction.
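Put together, the whole "pessimistic lock" sequence might look something
like this in isql (the key value 1 and the column colA are placeholders; a
real application would parameterise them and handle a lock failure):

  SET TRANSACTION NO WAIT ISOLATION LEVEL READ COMMITTED;

  /* 1. Take the "lock": fails at once if another transaction has an
        uncommitted update pending on this row. */
  UPDATE atable SET pkey = pkey WHERE pkey = 1;

  /* 2. The row is now reserved for this transaction; do the real edit. */
  UPDATE atable SET colA = 'new value' WHERE pkey = 1;

  /* 3. Releasing the lock is simply a matter of ending the transaction. */
  COMMIT;  /* or ROLLBACK to discard the edit and release the row */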
You do have to make sure that any triggers affecting the column you use
for the pessimistic lock handle the dummy update properly. You can use any
column for the lock, but the primary key is usually the best choice
(triggers aside) because users should not be allowed to edit it.
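For instance, if there were a trigger that does extra work whenever the
lock column changes, it could be guarded against the dummy update along
these lines (trigger name and body are purely illustrative):

  SET TERM ^ ;
  CREATE TRIGGER atable_bu0 FOR atable
  BEFORE UPDATE
  AS
  BEGIN
    /* The dummy update sets pkey to itself, so the value does not really
       change; in that case skip whatever the trigger normally does. */
    IF (NEW.pkey = OLD.pkey) THEN
      EXIT;
    /* ... the trigger's real work for a genuine key change goes here ... */
  END ^
  SET TERM ; ^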
hth
heLen