Subject: Re: [ib-support] WAIT doesn't work
Author: Brad Pepers
On Saturday 06 January 2001 18:11, you wrote:
> At 04:53 PM 06-01-01 -0500, Ann Harrison wrote:
> >I haven't read all the correspondence between you and Helen,
>
> There's a LOT of confusion there... :)) I don't want to add to it, but I'm
> in the process of writing quite a substantial tutorial on transaction
> contexts for IB Objects and I'm finding the beta docs to be rather
> inconsistent on the matter of locking. I'd really like for there to be a
> paper somewhere (IBPhoenix.com or IB_ReallyUseful at IB2K), spelling it out
> as it really is and possibly crystallising Brad's questions in some kind of
> FAQ.

There certainly was, but I hope some good can come of it, and that a document
that helps others comes out of the discussion.

> I think we need some kind of reference map showing the relationship between
> the SET TRANSACTION parameters and the parameters that go into the TPB of
> an API function call. The dispersion of documentation of these
> inter-dependent topics around 5 of the 6 manuals moves an already tricky
> issue into the realm of total mystery.

It sure was to me, though I think I have a decent grasp of it now!

> >but the transaction mode you want when you're getting the new
> >identifier is protected write, reserving id_table. If you're
> >using the API, call isc_start_transaction, using a transaction
> >parameter block with options of isc_tpb_protected, isc_tpb_lock_write,
> >"ID_TABLE".
>
> In considering the effects of the parameters surfaced by SET TRANSACTION,
> I'd forgotten totally about the protected/shared write implications. I
> think I can get some help by studying Jason's implementation of the TPB
> parameters in the IBO source, since I have IBO routines for multi-user
> updating of shared tables without incurring deadlocks, which I don't
> believe need stronger isolation than SNAPSHOT.
>
> I'd assumed I was queuing to get protected write access by grabbing a
> pessimistic lock with SET AFIELD=AFIELD with the WAIT option. This appears
> to work as expected, i.e. as I described to Brad. From the above
> paragraph, I think I must be actually getting more aggressive locking than
> I had previously assumed. In my implementations, it's OK - I'm getting
> what I want. The fact is, I would never want to design a routine like the
> one Brad describes, where every user needs exclusive ROW access (never mind
> actually needing to apply RESERVING TABLE access!!) to a table before
> being able to proceed with DML access to any other table.

Well, this is likely a personal choice thing, and though you may not like it,
it works extremely well for the application I'm writing. It's also common
enough that I've had email from others doing the same sort of thing. I
really don't see the problem you have with it, but whatever!
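
By the way, the pessimistic-lock trick Helen describes (SET AFIELD=AFIELD)
is just an update that changes nothing: inside a WAIT transaction it blocks
until the competing transaction commits or rolls back. A rough sketch of it
through the C API follows; the table, column, and key value are made up, and
the attach/start-transaction boilerplate is omitted:

    #include <ibase.h>

    /* Grab a pessimistic row lock via a dummy update.  In a transaction
       started with isc_tpb_wait this call blocks until any competing
       writer of the row commits or rolls back. */
    int lock_row(isc_db_handle *db, isc_tr_handle *trans)
    {
        ISC_STATUS status[20];
        static char stmt[] = "UPDATE ATABLE SET AFIELD = AFIELD WHERE ID = 42";

        if (isc_dsql_execute_immediate(status, db, trans, 0, stmt, 1, NULL)) {
            isc_print_status(status);   /* e.g. an update conflict */
            return -1;
        }
        return 0;   /* the row stays locked until commit or rollback */
    }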

> >A protected write/reserving transaction acquires all its locks
> >before it begins. In the case of short update transactions with
> >a high conflict rate, this serialization is actually a plus. By
> >avoiding conflicts and rollbacks, it gets better throughput than
> >parallel execution.
>
> As I understand Brad's requirements, he wants to avoid acquiring the
> next_id from the id generator, if a rollback occurs in the succeeding DML
> on the referred table. The only way to achieve that is to include the DML
> in the same transaction, which means the transaction has to hold its
> exclusive table lock until the succeeding DML actually commits or rolls
> back.

Yes, that is what I want and what I'm doing now. I don't want the protected
write/reserving approach, though, since that seems to be much too
coarse-grained a lock.
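
Just to make it concrete, my understanding is that the TPB Ann describes
would look roughly like the sketch below (please correct me if the details
are off). The reservation clause is the lock option, a length byte, and the
table name, followed by the protection level:

    #include <ibase.h>

    /* Protected write, reserving ID_TABLE, with WAIT -- all table locks
       are acquired up front, when the transaction starts. */
    static char tpb[] = {
        isc_tpb_version3,
        isc_tpb_write,          /* read-write */
        isc_tpb_concurrency,    /* SNAPSHOT isolation */
        isc_tpb_wait,           /* wait on lock conflicts */
        isc_tpb_lock_write, 8, 'I','D','_','T','A','B','L','E',
        isc_tpb_protected       /* protected write on the reserved table */
    };

    int start_reserving(isc_db_handle *db, isc_tr_handle *trans)
    {
        ISC_STATUS status[20];

        if (isc_start_transaction(status, trans, 1, db,
                                  (unsigned short) sizeof(tpb), tpb))
        {
            isc_print_status(status);
            return -1;
        }
        return 0;
    }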

> >Alternately, you could catch the update conflict error, rollback,
> >and try again.
>
> Brad said he wants to do this all on the server, i.e. he rejected the idea
> of forcing rollback on any exception at all in the succeeding DML. He says
> his model works with SQLAnywhere's locking strategy (which isn't necessarily
> a recommendation, but that's another war story!!). I shudder to think
> about the implications of simply assuming things worked on the server side
> when any uncommitted transaction has the potential to hold an unresolved
> serialized block on this table.

Well, having been told the server doesn't do it, I've adopted the model that
others have also mentioned: I catch the deadlock and retry. It's not as bad
as you suggest, though, since I don't have to do a whole rollback. I just
wait a small amount of time and then try the isc_dsql_execute again, only
returning an error if I keep getting deadlocks for a long enough period of
time. This has proved to work very well, and it's also only in the Interbase
driver of my C++ classes, so it's not a problem. What I didn't want to do is
affect the other databases that work fine the way things are by adding the
retry code at that level.
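
In case it helps anyone else, the retry amounts to something like this (a
sketch only: the retry budget and the sleep call are placeholders, and in my
code this lives inside the Interbase driver class):

    #include <ibase.h>
    #include <unistd.h>     /* usleep(); substitute your platform's sleep */

    #define MAX_RETRIES 50  /* placeholder: give up after ~5 seconds */

    /* Execute a prepared statement, retrying on deadlock instead of
       rolling the whole transaction back. */
    ISC_STATUS execute_with_retry(isc_tr_handle *trans, isc_stmt_handle *stmt)
    {
        ISC_STATUS status[20];
        int i;

        for (i = 0; i < MAX_RETRIES; i++) {
            if (isc_dsql_execute(status, trans, stmt, 1, NULL) == 0)
                return 0;                   /* success */
            if (status[1] != isc_deadlock)
                break;                      /* a different error: fail now */
            usleep(100 * 1000);             /* wait a little and try again */
        }
        isc_print_status(status);
        return status[1];
    }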

I'll stay away from the Sybase SQL Anywhere argument except to say that I've
seen both good and bad in each product, and there are certainly things in SQL
Anywhere that I think work better than Interbase (and vice versa too!).

The assumption that two transactions can work on the same data and have it
handled in a sane way (i.e. have one wait for the other to finish, or return
an error if it's really a deadlock case) seems like a very valid one to make
of a database, and not something to shudder about. As I said, this assumption
holds for every other database I've used, so it's Interbase that's the odd
case and must be programmed around. Add to this a manual that often says
that setting WAIT will cause one transaction to wait until the other one is
done, and it's very confusing!

All in all, a document that really goes into some depth on transactions,
multi-generational methodology, and what the isolation levels really do in
conflict cases (including a mapping from SET TRANSACTION to
isc_start_transaction) would be a nice thing to have!
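
As a starting point, here is the mapping as I currently understand it
(corrections welcome):

    READ WRITE / READ ONLY            ->  isc_tpb_write / isc_tpb_read
    WAIT / NO WAIT                    ->  isc_tpb_wait / isc_tpb_nowait
    ISOLATION LEVEL SNAPSHOT          ->  isc_tpb_concurrency
    ... SNAPSHOT TABLE STABILITY      ->  isc_tpb_consistency
    ... READ COMMITTED                ->  isc_tpb_read_committed
        [NO] RECORD_VERSION           ->  isc_tpb_rec_version /
                                          isc_tpb_no_rec_version
    RESERVING t FOR PROTECTED WRITE   ->  isc_tpb_lock_write, <len>, t,
                                          isc_tpb_protected

So, for example, SET TRANSACTION READ WRITE WAIT ISOLATION LEVEL SNAPSHOT
RESERVING ID_TABLE FOR PROTECTED WRITE corresponds to the TPB sketched
earlier in this message.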

--
Brad Pepers
brad@...