Subject: Re: Read committed wait
Author: Alexander V. Nevsky
--- In Firebird-Architect@yahoogroups.com, Nickolay Samofatov
<skidder@b...> wrote:
> Hello, Alexander,
>
> I already explained why this stuff happens, why it is correct
> behaviour and how to use explicit pessimistic locks to work around
> it if really needed. I can explain this again.

Nickolay, to be honest I don't need explanations of why the current
behaviour is correct, though I have no objections to discussing your
point of view publicly. Your POV is not the ultimate truth - look at
Oracle and MSSQL, for example. I believe they are more correct; you
have another opinion. I don't need explanations about how to use
explicit pessimistic locking either, because I have been using it in
the proper circumstances for 8 years.

> Oracle places an update lock on a record at least for the period of
> record value calculation. This creates quite interesting problems
> if you try to modify this record in any triggers invoked (possibly
> nested), or even better in an autonomous transaction spawned from a
> trigger. You are going to get a cryptic error message at best.

Thank goodness, we can't spawn transactions from a trigger. And as
for triggers' effects... Should I _explain_ :) how reliable the newly
suggested pessimistic locking, which doesn't fire triggers, will be
when the tables those triggers would affect during the real update
can be accessed not only through them, but directly or from other
triggers?

> And this redundant locking inhibits performance.

Are we already discussing implementation details and searching for
the fastest one, or are we still discussing conceptual cleanness? You
know, sometimes I speak this way to my clients too - "the thing you
want will consume too many resources on your hardware and you'll wait
hours for a response". Usually this means the money they pay is
insufficient to repay the effort of implementing it properly ;)

> In addition, you should be aware of this locking because in some
> cases deadlocks may be possible (Oracle tries to avoid them by
> explicitly forbidding you to use the locked record, but not always).

We should always be aware of deadlocks, and we can easily create them
if we spend too little time planning access. IMHO the server should
be able to detect them and raise an exception.
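
For readers who want the classic recipe (table and column names are
mine, not from this thread): two wait-mode transactions touch the
same two rows in opposite order.

   -- transaction 1:
   UPDATE t SET a = 1 WHERE id = 1;
   -- transaction 2:
   UPDATE t SET a = 2 WHERE id = 2;
   -- transaction 1 (now waits on the row transaction 2 holds):
   UPDATE t SET a = 1 WHERE id = 2;
   -- transaction 2 (now waits on transaction 1 - a cycle):
   UPDATE t SET a = 2 WHERE id = 1;
   -- the server must detect this and raise a deadlock error in one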

> I think that Firebird behaviour is not worse here. Any reasonable
> application has to handle update conflicts and deadlocks anyway.

In general - agreed, but don't you see a class of applications which
require conflict-free increments? Counters of events in parallel
streams, for example, or accumulators of that kind (see the sketch
below). Whether it is wise to simplify the design of such
applications at the risk of breaking existing ones is another
question.
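
A minimal sketch of what I mean, assuming the no_rec_version wait
behaviour discussed below (table and column names are mine):

   SET TRANSACTION WAIT READ COMMITTED NO RECORD_VERSION;
   UPDATE event_counters SET hits = hits + 1 WHERE stream_id = 7;
   COMMIT;
   -- many parallel streams run this; ideally each waiter simply
   -- applies its increment to whatever value the last committer left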

> MSSQL cannot be compared with Firebird or Oracle in this test
> because it is a lock-based architecture, not MGA.

And what of it? Can't MGA recognize whether a record is locked or not
before making an update?

> No. I already explained and even documented that no_rec_version only
> decreases the possibility of update conflicts, it does not really
> eliminate them. Try to have 2 or 3 waiting transactions. At least
> N-1 should fail with an update conflict exception.

Nickolay, this just means FB can't build a simple FIFO queue,
transferring control over the record to the next waiter after one of
the concurrent transactions commits. Should I explain? :)
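
Sketched out (reusing my hypothetical counter table): three
transactions queue up on one row, and instead of being served one
after another, all but the first committer fail.

   -- T1, T2 and T3, all in wait mode, each run:
   UPDATE event_counters SET hits = hits + 1 WHERE stream_id = 7;
   -- T1 gets the row; T2 and T3 wait
   -- T1 commits; with rec_version at least one of T2/T3 then gets
   -- an update conflict instead of simply applying its increment,
   -- where a FIFO queue would serve T2, then T3, with no errors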

> > 2. Our transaction waited, seeing uncommitted changes. Well, at
> > some moment these changes became committed. Why is a conflict
> > registered?
>
> Because the data we retrieved and used to calculate values for the
> update have become invalid.

In read_committed this data can become invalid at the very moment of
fetching. You fetched, a concurrent transaction committed, you
calculated and wrote. Voila. Concurrency, Nickolay, concurrency for
such tasks; use read_committed for increments only. (A sketch of this
race is below.)
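
The race, spelled out (table, column and values are mine):

   -- both transactions are READ COMMITTED; balance starts at 100
   -- T1: SELECT balance FROM accounts WHERE id = 1;  -- reads 100
   -- T2: UPDATE accounts SET balance = 150 WHERE id = 1; COMMIT;
   -- T1: UPDATE accounts SET balance = 110 WHERE id = 1; -- "100+10"
   -- T1 commits and the 150 is silently lost: no conflict, wrong
   -- data; with SET balance = balance + 10 the increment survives
   -- in either order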

> Engine doesn't attempt to repeat this operation
> or pessimistically lock records for the period of UPDATE execution.
> It returns an error to the user for handling.

But in no_rec_version it seems the engine detects the lock before the
update and waits? This is a layman's conclusion drawn from observing
a black box, no more; perhaps another mechanism is at work, but with
the same effect.

> > Since this moment the data is updatable according to the
> > definition of the read_committed isolation level. I predict a
> > contradiction: we wanted to update one version of the record, but
> > once our concurrent's changes were committed this is another
> > version, and our changes can be incorrect. But if we make an
> > update like Set a=b, and our intention is based on the previous
> > state of this record, in read_committed this state can become
> > outdated just after our read, and we will successfully make a
> > wrong update.
> >
> > To be sure of the correctness of such updates we should use the
> > concurrency isolation level and read the data to be updated in
> > this transaction's context.
>
> No. When you write "update something set a=1 where a=2" even in a
> read committed transaction, you expect that records with "a=3"
> should not be modified and set to "a=1".

Why, if it is committed by the moment of the statement's execution?
BTW, with 2 transactions in no_rec_version mode we have this right
now: the statement "update something set a=1 where a=2" waits even
for records whose "a" is being updated from 3 to 2 by another
statement that is not yet committed. (See the timeline below.)
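
The timeline I mean, assuming no_rec_version and wait mode:

   -- the row starts with a=3, committed
   -- T2: UPDATE something SET a=2 WHERE ...;  (not yet committed)
   -- T1: UPDATE something SET a=1 WHERE a=2;
   -- T1 waits on T2's row although the latest committed version has
   -- a=3 and does not match the WHERE clause; if T2 commits, T1
   -- proceeds and sets a=1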

> There are only two correct ways to ensure this while data is
> dynamically changing. Either use implicit pessimistic locks, hold
> them for an arbitrarily long time (if the UPDATE statement is
> complex) and struggle against internal deadlocks like Oracle does.
> Or use an optimistic concurrency model and raise an exception if the
> world changes between the point when we read the record and the
> point when we update it. Firebird does it the latter way, and this
> is the best way I think.

It seems Firebird does it both ways depending on the rec_version
parameter, and personally I believe no_rec_version is more correct.
Should I remind you that if we make a mass update on a table, even in
rec_version mode, records on the pages visited last that are
committed after we start the update, but before we visit their page,
will be updated too?

> > Is it sensible to play with nanoseconds at the time of the update
> > itself?
>
> It is. Correctness of operation is above everything. And these are
> not nanoseconds, but possibly some seconds, because the period
> between the update and the lock can be really long if waiting on
> locks is involved.

Sorry, I didn't understand - or you are speaking about different
times. I meant: is it sensible to care whether changes were committed
in the same nanosecond when we make the update or not, if they could
have been updated and committed 1000 times since we read them to
calculate the values? We just shouldn't rely on this, and should use
concurrency - or pessimistic locking, if you prefer to spawn
superfluous versions. (Both alternatives are sketched below.)
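
The two alternatives in Firebird terms (a sketch; WITH LOCK is the
explicit lock statement discussed in this thread, the table is mine):

   -- concurrency (snapshot): read and update in one stable context
   SET TRANSACTION SNAPSHOT;
   SELECT a, b FROM t WHERE id = 1;    -- calculate new values here
   UPDATE t SET a = 5 WHERE id = 1;    -- conflicts if t changed
   COMMIT;

   -- or pessimistic: pin the record first, at the cost of a dummy
   -- record version
   SELECT a, b FROM t WHERE id = 1 WITH LOCK;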

> > IMO in read_committed wait a reasonable person will make only
> > incremental updates like Set a=a+b,
>
> Wrong. A person should be able to do whatever they like at this
> isolation level. Reasonable OLTP applications designed for heavy
> load do all their updates in read_committed mode. It is possible for
> some UPDATE statements to avoid update conflicts by repeating the
> offending operation internally - re-fetch the record, re-evaluate
> the search condition like the explicit lock statement does, plus
> re-calculate the update values. But the latter may produce unwanted
> side-effects. Applications do not expect that triggers may normally
> fire multiple times for a single record update operation, for
> example. Or that Firebird will repeat computationally expensive
> operations placed inside the UPDATE statement multiple times. Thus I
> should say that it is better to leave everything "AS IS" and use
> explicit pessimistic locking when you really need to avoid update
> conflicts, as pessimistic locks may not cause any of the unwanted
> side-effects.

This is a reasonable POV for one class of applications and not for
another. I can't judge which of us is more right :)

> > and if FB behaviour will be the same as that of other servers,
> > this will simplify the design of such applications and the porting
> > of applications between servers.
>
> No. Conflict resolution is a very implementation-specific area.
> And I also think that Firebird is very consistent in it.
> It uses optimistic locking everywhere unless it is asked to go
> another way, via isc_tpb_consistency mode and table locking or via
> the explicit lock statement and record locking.

Nickolay, you consistently talk as if concurrency doesn't exist as a
compromise between the frivolous read_committed and the terrible
consistency :) (For readers: the three modes are sketched below.)
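
The three modes as set from SQL, with their TPB constants (a sketch):

   SET TRANSACTION READ COMMITTED RECORD_VERSION;  -- isc_tpb_read_committed
   SET TRANSACTION SNAPSHOT;                       -- isc_tpb_concurrency
   SET TRANSACTION SNAPSHOT TABLE STABILITY;       -- isc_tpb_consistency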

> > 3. If I'm not wrong above, which evil is less - to change the
> > behaviour and perhaps break some existing applications based on
> > it,
>
> How exactly do you want to change it?

Can't say I want to; as I said, we have loads of existing
applications. I believe the behaviour we have with no_rec_version,
strengthened with a FIFO queue, is the most reasonable behaviour, but
I have doubts whether it is wise to change horses in midstream.

> All applications should handle deadlocks and update conflicts if
> they are expected to be used in a multi-user environment and they
> use the optimistic locking model. I do not see a point in supporting
> incorrect implementations.

We haven't yet agreed on what we recognize as a correct
implementation :) Nickolay, I have a proposition: let's stop our duel
for 3 days, and if no one joins the discussion - forget it - this
will mean everyone is happy with the current situation (FB behaves
its own way, not like all the rest of the world) and I must give up
my doubts to the majority.

Best regards,
Alexander.