Subject: one more q about row-level locking techniques
Author: Robby
It seems Firebird (as of 1.5) now has two methods of doing row-level
locking: the implicit transaction-level locking that came from its IB
heritage, and a new explicit pessimistic locking mechanism (the WITH
LOCK clause on SELECT statements). I just need some clarification on
these two methods.

My use-case:
Say I have a certain table, foobar, with a UNIQUE field called id.
Given the context of a master id column called 'masterid', that id is
what end-users use to reference an entry. I assign the id in a process
like "NEW.id = MAX(id) FROM foobar WHERE masterid = :passed_masterid".
Of course, I want only one transaction at a time to be able to get
that result: not just exclusive access to the row holding the highest
id, but to the result of MAX(id) itself. Other transactions should
have to wait before they can run MAX(id) and access this data.
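
To be concrete, the assignment happens roughly like the sketch below
(whether it ends up in a trigger or a stored procedure); the names are
just the ones from my example above, and I've written MAX(id) + 1 so
the new row gets the next number:

  SET TERM ^ ;
  CREATE TRIGGER foobar_bi FOR foobar
  ACTIVE BEFORE INSERT POSITION 0
  AS
    DECLARE VARIABLE next_id INTEGER;
  BEGIN
    /* take the current highest id for this masterid and add one;
       COALESCE covers the very first row for a given masterid */
    SELECT COALESCE(MAX(id), 0) + 1
      FROM foobar
      WHERE masterid = NEW.masterid
      INTO :next_id;
    NEW.id = :next_id;
  END ^
  SET TERM ; ^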

(My full reason for needing any kind of locking is described in the
thread at http://groups.yahoo.com/group/firebird-support/message/44643 .
Alexandre was kind enough to get me on the right track, but I need
some clarification on just this one more thing.)

Anyhow, where this gets a bit confusing for me is that I've read the
paper on Auditable Series over at IBObjects, and they use a simple
dummy-update kind of method to get a lock on the appropriate data
until the transaction ends. They say this will prevent other
transactions from accessing the data (a "planned deadlock", as they
call it). BUT, from what I've read in the article at
http://www.linuxjournal.com/article.php?sid=7010 , the same sort of
operation will just cause the other transaction to read an older copy
of that data (citing Firebird's multi-version concurrency system).
Which is it? Does it depend on the properties of the transaction that
I start? With SET TRANSACTION, the only related option I saw was NO
WAIT, which makes the transaction error out when it finds a lock on
the data... nothing about getting an older version of the data...
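
Just so I'm comparing the right things, by "dummy update" I mean
something along these lines, versus the new explicit syntax (names are
from my example above; the SET id = id is only there to dirty the rows
so my transaction holds them):

  /* dummy-update style: write a column back to itself so the rows
     for this masterid stay locked until my transaction ends */
  UPDATE foobar SET id = id
   WHERE masterid = :passed_masterid;

  /* explicit pessimistic lock, new in 1.5 */
  SELECT id
    FROM foobar
   WHERE masterid = :passed_masterid
    WITH LOCK;

(If other transactions just read back the older record versions
instead of waiting, then the dummy update obviously doesn't buy me
what I need.)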

Basically, I am just wondering whether I need and want the explicit
pessimistic locking (the release notes say its usage should be very
rare), or whether I can get by with a dummy update to lock the
result. Plus, this is all ROW-level locking, right? So other routines
may still be getting at other data in other rows... I definitely
don't want table-level locking. :)
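
And if the answer does come down to the transaction properties, the
only related knob I've found so far is WAIT / NO WAIT, i.e. roughly:

  SET TRANSACTION WAIT;     /* block until the locking transaction finishes */
  SET TRANSACTION NO WAIT;  /* or error out immediately on a lock conflict */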


Thanks,

Robby