| Subject   | Re: [Firebird-Architect] Read commited wait |
|-----------|---------------------------------------------|
| Author    | Nickolay Samofatov                          |
| Post date | 2003-08-31T12:17:28Z                        |
Hello, Alexander,
I already explained why this stuff happens, why it is correct
behaviour and how to use explicit pessimistic locks to work it around
if really needed. I can explain this again.
> I tried to discuss this in fb-devel, but it seems there is no proper
> time to bother developers who are busy with the release.
> Let's compare the behaviour of update in the mentioned mode for popular
> RDBMS:
> 1. Application 1 starts a transaction and updates some record.
> 2. Application 2 starts a read_commited rec_version wait
> transaction (for Oracle and MSSQL - a read committed transaction and
> wait statement), tries to update the same record and waits.
> 3. Application 1 commits. Application 2:
>
>     IB/FB                 Oracle            MSSQL
>     exception:            performs update   performs update
>     "update conflicts     and is ready      and is ready
>     with concurrent       to commit         to commit
>     update"

Oracle places an update lock on a record at least for the period of
record values calculation. This creates very interesting problems
if you try to modify this record in any of the triggers invoked (possibly
nested), or, even better, in an autonomous transaction spawned from a
trigger. You are going to get a cryptic error message at best.
This redundant locking also inhibits performance.
In addition, you have to be aware of this locking, because in some cases
deadlocks are possible (Oracle tries to avoid them by explicitly
forbidding you to use a locked record, but not always).
I think Firebird's behaviour is no worse here. Any reasonable
application has to handle update conflicts and deadlocks anyway.
MSSQL cannot be compared with Firebird or Oracle in this test, because
it has a lock-based architecture, not MGA (multi-generational architecture).
> note in read_commited no_rec_version wait FB behaviour is the same as
> Oracle and MSSQL, but this mode is very inconvenient for selects
> within the same transaction.

No. I already explained, and even documented, that no_rec_version only
decreases the possibility of update conflicts; it does not really
eliminate them. Try it with 2 or 3 waiting transactions: at least N-1
of them should fail with an update conflict exception.
> 2. Our transaction waited, seeing uncommitted changes. Well, at some
> moment these changes became committed. Why is a conflict registered?

Because the data we retrieved and used to calculate the values for the
update has become invalid. The engine doesn't attempt to repeat this
operation or pessimistically lock records for the duration of the
UPDATE's execution. It returns an error to the user for handling.
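As a minimal sketch of what "handling" can look like on the server side (the table, columns, and the exact GDSCODE symbol are my assumptions, not taken from this post; check your Firebird version's error-handling documentation), a PSQL procedure can catch the conflict and let the caller retry in a fresh transaction:

```sql
-- Hypothetical sketch: turning an update conflict into a retry signal.
-- "accounts", "balance", and the gdscode symbol update_conflict are
-- illustrative assumptions.
CREATE PROCEDURE try_update_balance (acc_id INTEGER, delta INTEGER)
RETURNS (ok SMALLINT)
AS
BEGIN
  ok = 1;
  UPDATE accounts SET balance = balance + :delta
   WHERE id = :acc_id;
  WHEN GDSCODE update_conflict DO
    ok = 0;  -- caller should roll back and retry the whole transaction
END
```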
> Since this moment data is updatable in accordance to the definition of
> the read_commited isolation level. I predict a contradiction - we wanted
> to update one version of a record, but when our concurrent's changes were
> committed this is another version, and our changes can be incorrect. But
> if we make an update like Set a=b and our intention is based on the
> previous state of this record, in read_commited this state can be
> outdated just after our read, and we will successfully make a wrong update.
> To be sure of the correctness of such updates, we should use the
> concurrency isolation level and read the data to be updated in this
> transaction's context.

No. When you write "update something set a=1 where a=2", even in a read
committed transaction you expect that records with "a=3" will not be
modified and set to "a=1". There are only two correct ways to ensure
this while data is dynamically changing. Either use implicit pessimistic
locks, hold them for an arbitrarily long time (if the UPDATE statement is
complex), and struggle against internal deadlocks, as Oracle does.
Or use an optimistic concurrency model and raise an exception if the
world changes between the point when we read a record and the point when
we update it. Firebird takes the latter way, and I think this is the
best way.
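The optimistic path can be seen in a two-session sketch (table "t" and its values are made up; the SET TRANSACTION syntax follows Firebird's SQL dialect):

```sql
-- Session 1:
SET TRANSACTION WAIT ISOLATION LEVEL READ COMMITTED RECORD_VERSION;
UPDATE t SET a = 3 WHERE a = 2;    -- row changed, not yet committed

-- Session 2:
SET TRANSACTION WAIT ISOLATION LEVEL READ COMMITTED RECORD_VERSION;
UPDATE t SET a = 1 WHERE a = 2;    -- blocks, waiting for session 1

-- Session 1:
COMMIT;

-- Session 2 wakes up: the committed row no longer matches "a = 2"
-- (it is now a = 3), so instead of updating the wrong version the
-- engine raises "update conflicts with concurrent update".
```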
> Is it sensible to play with nanoseconds in time of update itself?

It is. Correctness of operation is above everything. And it is not a
matter of nanoseconds but possibly of seconds, because the period between
the update and the lock can be really long if waiting on locks is involved.
> IMO in read_commited wait a reasonable person will make only incremental
> updates like Set a=a+b,

Wrong. A person should be able to do whatever they need at this isolation
level. Reasonable OLTP applications designed for heavy load do all their
updates in read_committed mode. It would be possible for some UPDATE
statements to avoid update conflicts by repeating the offending operation
internally - re-fetch the record, re-evaluate the search condition as the
explicit lock statement does, and re-calculate the update values - but
this may produce unwanted side-effects. Applications do not expect
that triggers may fire multiple times for a single record update
operation, for example, or that Firebird will repeat computationally
expensive operations placed inside the UPDATE statement multiple times.
Thus I should say that it is better to leave everything "AS IS" and use
explicit pessimistic locking when you really need to avoid update
conflicts, as pessimistic locks do not cause any of these unwanted
side-effects.
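A sketch of that explicit pessimistic lock, assuming Firebird's SELECT ... WITH LOCK statement and made-up table and column names:

```sql
-- Lock the target row up front; a concurrent writer now waits here
-- instead of causing an update conflict inside our UPDATE.
SELECT id FROM accounts WHERE id = 42 FOR UPDATE WITH LOCK;

-- With the row lock held, the UPDATE runs once: triggers fire once and
-- expensive expressions in the statement are evaluated once.
UPDATE accounts SET balance = balance + 100 WHERE id = 42;
COMMIT;
```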
> and if FB behaviour will be the same as
> other servers, this will simplify design of such applications and
> porting of applications between servers.

No. Conflict resolution is a very implementation-specific area.
And I also think that Firebird is very consistent here.
It uses optimistic locking everywhere unless it is asked to go the other
way, either via isc_tpb_consistency mode and table locking, or via the
explicit lock statement and record locking.
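My understanding of how the table-locking escape hatch maps to SQL (the mapping and table name are assumptions on my part):

```sql
-- isc_tpb_consistency at the TPB/API level corresponds to
-- SNAPSHOT TABLE STABILITY at the SQL level; RESERVING names the
-- tables to lock up front ("orders" is illustrative).
SET TRANSACTION ISOLATION LEVEL SNAPSHOT TABLE STABILITY
    RESERVING orders FOR PROTECTED WRITE;
```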
> 3. If I'm not wrong above, what evil is less - change behaviour and
> perhaps break some existing application based on it,

How exactly do you want to change it?

> or leave it as is and sacrifice portability and simplicity of design
> of some particular applications which require conflictless incremental
> updates, losing some part of potential newcomers from Oracle and MSSQL?

All applications expected to be used in a multi-user environment should
handle deadlocks and update conflicts, and they should use an optimistic
locking model. I do not see a point in supporting incorrect
implementations.
> Alexander.

--
Nickolay Samofatov