| Subject | Re: Read commited wait |
|---|---|
| Author | Alexander V.Nevsky |
| Post date | 2003-09-01T17:33:08Z |
--- In Firebird-Architect@yahoogroups.com, Nickolay Samofatov
<skidder@b...> wrote:
> Hello, Alexander,
>
> I already explained why this stuff happens, why it is correct
> behaviour and how to use explicit pessimistic locks to work it around
> if really needed. I can explain this again.

Nickolay, to be honest I don't need an explanation of why the current
behaviour is correct, though I have no objection to discussing your
point of view publicly. Your POV is not the final word - look at
Oracle and MSSQL, for example. I believe they are more correct; you
hold another opinion. Nor do I need an explanation of how to use
explicit pessimistic locking, since I have been using it in the proper
circumstances for 8 years.
> Oracle places update lock on a record at least for the period of
> record values calculation. This creates quite interesting problems
> if you try to modify this record in any triggers invoked (possibly
> nested) or even better in autonomous transaction spawned from a trigger.
> You are going to get a cryptic error message at best.

Thank goodness, we can't spawn transactions from a trigger. If we
speak about triggers' side effects... should I _explain_ :) how
reliable the newly suggested pessimistic locking will be - the kind
that doesn't fire triggers - if the tables those triggers would affect
during the real update can be reached not only through them but also
directly, or from other triggers?
> And this redundant locking inhibits performance.

Are we already discussing implementation details in search of the
fastest one, or are we still discussing conceptual cleanliness? You
know, sometimes I too tell my clients: the thing you want will consume
too much of your hardware's resources and you'll wait hours for a
response. Usually this means the money they pay is insufficient to
repay the effort of implementing it properly ;)
> In addition, you should be aware of this locking because in some cases
> deadlocks may be possible (Oracle tries to avoid them by explicitly
> forbidding you to use a locked record, but not always).

We should always be aware of deadlocks, and we can easily create them
by spending too little time planning access. IMHO the server should be
able to detect them and raise an exception.
> I think that Firebird behaviour is not worse here. Any reasonable
> application has to handle update conflicts and deadlocks anyway.

In general - agreed, but don't you see a class of applications which
requires conflict-free increments? Counters of events in parallel
streams, for example, or accumulators of that kind. Whether it is wise
to simplify the design of such applications at the risk of breaking
existing ones is another question.
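A minimal sketch of the kind of counter in question (table and column
names are hypothetical, not from the thread). The point is that the
increment is expressed relative to the stored value, so no stale
client-side read is involved in computing the new value:

```sql
-- Hypothetical schema for per-stream event counters.
CREATE TABLE event_counters (
    stream_id INTEGER NOT NULL PRIMARY KEY,
    hits      INTEGER DEFAULT 0 NOT NULL
);

-- A "blind" relative increment: the server computes the new value
-- from the current committed one, so concurrent writers need only be
-- serialized, not rejected - the behaviour Alexander wants from a
-- waiting read-committed transaction.
UPDATE event_counters
   SET hits = hits + 1
 WHERE stream_id = 42;
```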
> MSSQL cannot be compared with Firebird or Oracle in this test because
> it is lock-based architecture, not MGA.

And what of it? Can't MGA recognize whether a record is locked before
making an update?
> No. I already explained and even documented that no_rec_version only
> decreases the possibility of update conflict, not really eliminates them.
> Try to have 2 or 3 waiting transactions. At least N-1 should fail with
> update conflict exception.

Nickolay, this just means FB can't build a simple FIFO queue,
transferring control over the record to the next waiter after one of
the concurrent transactions commits. Should I explain? :)
> > 2. Our transaction waited seeing uncommitted changes. Well, at some
> > moment these changes became committed. Why is a conflict registered?
>
> Because the data we retrieved and used to calculate values for update
> have become invalid.

In read_committed this data can become invalid at the very moment of
fetching. You fetched, a concurrent transaction committed, you
calculated and wrote. Voila. Concurrency, Nickolay - the concurrency
isolation level is for such tasks; use read_committed for increments
only.
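The fetch-then-write race described above can be sketched as two
interleaved sessions (hypothetical table "something"; steps run in the
order shown):

```sql
-- Session 1 reads the current value to compute an update:
SELECT a FROM something WHERE id = 1;     -- sees a = 10

-- Session 2 commits a change in the meantime:
UPDATE something SET a = 20 WHERE id = 1;
COMMIT;

-- Session 1 now writes a value derived from the stale a = 10.
-- Under read_committed no conflict is raised here, and session 2's
-- update is silently lost.
UPDATE something SET a = 11 WHERE id = 1;
```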
> Engine doesn't attempt to repeat this operation
> or pessimistically lock records for the period of UPDATE execution.
> It returns an error to the user for handling.

But in no_rec_version it seems the engine detects the lock before the
update and waits? That is a layman's conclusion drawn from observing a
black box, no more; perhaps a different mechanism is at work, but with
the same effect.
> > Since this moment the data is updatable in accordance with the
> > definition of the read_committed isolation level. I predict a
> > contradiction - we wanted to update one version of the record, but
> > when our concurrent's changes were committed it is another version
> > and our changes can be incorrect. But if we make an update like
> > "set a=b" and our intention is based on the previous state of this
> > record, in read_committed this state can become outdated just after
> > our read and we will successfully make a wrong update.
> >
> > To be sure of the correctness of such updates we should use the
> > concurrency isolation level and read the data to be updated in this
> > transaction's context.
>
> No. When you write "update something set a=1 where a=2" even in a read
> committed transaction you expect that records with "a=3" should not be
> modified and set to "a=1".
> modified and set to "a=1".Why if it is commited to the moment of statement's execution? BTW,
with 2 transaction in no_rec_version mode we have this just now even
if statement "update something set a=1 where a=2" waits for records
which "a" is updated to 2 from 3 by another statement and not yet
commited.
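That two-transaction scenario, sketched below with a hypothetical
table; the wait/no_rec_version behaviour is as described in this
thread, not independently verified:

```sql
-- Transaction 1 (not yet committed): moves rows from a = 3 to a = 2.
UPDATE something SET a = 2 WHERE a = 3;

-- Transaction 2 (read_committed, no_rec_version, wait): blocks on the
-- rows locked above. If transaction 1 then commits, those rows now
-- satisfy "a = 2" - even though that became true only after this
-- statement began - and are updated to 1.
UPDATE something SET a = 1 WHERE a = 2;
```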
> There are only two correct ways to ensure
> this while data is dynamically changing. Either use implicit pessimistic
> locks, hold them for an arbitrarily long time (if the UPDATE statement is
> complex) and struggle against internal deadlocks like Oracle does.
> Or use optimistic concurrency model and raise exception if world
> changes between the point when we read record and update it.
> Firebird does the former way and this is the best way I think.

It seems Firebird does it both ways, depending on the rec_version
parameter, and personally I believe no_rec_version is more correct.
Should I remind you that in a mass update on a table, even in
rec_version mode, records on the pages visited last - committed after
we started the update but before we visited their page - will be
updated too?
> > Is it sensible to play with nanoseconds in time of update itself?
>
> It is. Correctness of operation is above everything. And there are not
> nanoseconds, but possibly some seconds, because the period between update and
> lock can be really long if waiting on locks is involved.

Sorry, I didn't understand - or perhaps you mean different timings. I
meant: is it sensible to care whether changes were committed in the
very nanosecond we make the update, when they could have been updated
and committed 1000 times since we read them to calculate the values?
We just shouldn't rely on this; use concurrency - or pessimistic
locking, if you prefer to spawn superfluous versions.
> > IMO in read_committed wait a reasonable person will make only incremental
> > updates like Set a=a+b,
>
> Wrong. A person should be able to do whatever at this isolation level.
> Reasonable OLTP applications designed for heavy load do all their
> updates in read_committed mode. While it is possible for some UPDATE
> statements to avoid update conflicts by repeating the offending operation
> internally - re-fetch the record, re-evaluate the search condition like the
> explicit lock statement does, plus re-calculate update values - the
> latter may produce unwanted side-effects. Applications do not expect
> that triggers may normally fire multiple times for a single record
> update operation, for example. Or that Firebird will repeat
> computationally expensive operations placed inside the UPDATE
> statement multiple times. Thus I should say that it is better to
> leave everything "AS IS" and use explicit pessimistic locking when you
> really need to avoid update conflicts, as pessimistic locks may not cause
> any of the unwanted side-effects.

This is a reasonable POV for one class of applications and not for
another. I can't judge which of us is more right :)
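For readers following along, the "explicit lock statement" discussed
here can be sketched roughly as follows. This assumes Firebird's
SELECT ... WITH LOCK syntax (introduced around Firebird 1.5) and a
hypothetical table; treat it as an illustration, not a reference:

```sql
-- Pessimistically lock the row first; the lock is held until
-- commit/rollback, so the subsequent UPDATE in the same transaction
-- cannot hit an update conflict from a concurrent writer.
SELECT a
  FROM something
 WHERE id = 1
   FOR UPDATE WITH LOCK;

UPDATE something
   SET a = a + 1
 WHERE id = 1;
```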
> > and if FB behaviour will be the same as
> > other servers', this will simplify the design of such applications and
> > the porting of applications between servers.
>
> No. Conflict resolution is a very implementation-specific area.
> And I also think that Firebird is very consistent in it.
> It uses optimistic locking everywhere unless it is asked to go another
> way via isc_tpb_consistency mode and table locking, or via the explicit
> lock statement and record locking.

Nickolay, you persistently talk as though the concurrency mode didn't
exist as a compromise between frivolous read_committed and terrible
consistency :)
> > 3. If I'm not wrong above, which evil is lesser - change the behaviour and
> > perhaps break some existing applications based on it,
>
> How exactly do you want to change it?

Can't say I want to - as I said, we have a load of existing
applications. I believe the behaviour we have with no_rec_version,
strengthened with a FIFO queue, is the most reasonable one, but I
doubt whether it is wise to change horses in midstream.
> All applications should handle deadlocks and update conflicts if they
> are expected to be used in a multi-user environment and they use an optimistic
> locking model. I do not see a point in support of incorrect implementations.

We haven't yet agreed on what we recognize as a correct implementation :)
Nickolay, I have a proposition: let's pause our duel for 3 days, and
if no one joins the discussion - forget it - that will mean everyone
is happy with the current situation (FB behaves its own way, unlike
all the rest of the world) and I must yield to the majority with my
doubts.
Best regards,
Alexander.