Subject Re: [firebird-support] Re: Lock conflict on no wait transaction
Author David Johnson
I'm always glad to learn more. I am a firm believer in using the right
hammer for the job, and I am always looking for new hammers, so please
feel free to expand your correction. I may not always agree with you,
but I respect your opinions and intimate knowledge of the practical
aspects of working with Firebird.

I don't object to lock conflicts, per se. As you say, they are the sign
that the DBMS is doing its job. Locks used to be my major means of
control until I started working in large scale systems.

However, since entering the world of large scale mission critical
systems, I have learned to object to an interactive process holding a
lock during the long timeframe of a user's edit (> 1 second).

Non-interactive processes have different constraints (the user can't
just walk away since there is no user) and they tend not to hold locks
for long time periods.

Either type of process requires appropriate retry logic for the
application.
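
As a minimal sketch of what that retry logic can look like, here is a
Python fragment assuming a PEP 249 style Firebird driver (for example
firebird-driver or fdb, both of which expose a DatabaseError class);
the helper name, attempt count and backoff values are illustrative
assumptions, not a prescription:

import random
import time

from firebird.driver import DatabaseError  # any PEP 249 driver exposes DatabaseError

MAX_ATTEMPTS = 5

def update_with_retry(con, sql, params):
    """Run one short write transaction, retrying briefly on conflicts."""
    for attempt in range(MAX_ATTEMPTS):
        cur = con.cursor()
        try:
            cur.execute(sql, params)
            con.commit()       # keep the write transaction as short as possible
            return True
        except DatabaseError:
            # Lock/update conflict reported by the engine; a real handler
            # would inspect the error code before deciding to retry.
            con.rollback()     # release whatever the failed attempt acquired
            time.sleep(random.uniform(0.01, 0.05) * (attempt + 1))  # brief, growing backoff
    return False               # repeated conflicts: let the caller decide what to do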

Let me explain how I arrived at this position, and in turn I would love
to hear your suggestions as to what "should be", and how a multi-
versioning system aids in accomplishing this.

On business-critical tables in a large-scale system, a one-second
delay in a row update translates into a measurable (three-digit)
number of dollars in lost business.

The majority of table update transactions in our system occur without
human intervention and complete in under 50 ms.

When locks were our preferred control mechanism, a user updating a
record would (and did) lock out necessary business-critical updates.
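
In rough Python terms, that old pattern looked something like this,
against a hypothetical ORDERS table (Firebird's FOR UPDATE WITH LOCK
fetches and locks the row, and ask_user stands in for the interactive
edit). Any automated writer on a NO WAIT transaction that touched the
same row got the "lock conflict on no wait transaction" error from the
subject line instead of queuing:

def edit_with_pessimistic_lock(con, order_id, ask_user):
    """The row lock is held across the user's entire think time."""
    cur = con.cursor()
    # FOR UPDATE WITH LOCK makes Firebird lock the row as it is fetched.
    cur.execute(
        "SELECT STATUS FROM ORDERS WHERE ORDER_ID = ? FOR UPDATE WITH LOCK",
        (order_id,),
    )
    (status,) = cur.fetchone()
    new_status = ask_user(status)   # user think time: seconds or minutes, lock still held
    cur.execute(
        "UPDATE ORDERS SET STATUS = ? WHERE ORDER_ID = ?",
        (new_status, order_id),
    )
    con.commit()                    # only now can a blocked or failing writer get through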

At that time, the retry costs of automated processes colliding with
user edits (and with other retries) actually outweighed the business
processing costs themselves. System performance was poor, and we had
already upgraded our hardware platform to its limit.

Use of this and related mechanisms reduced our processing costs and
improved overall system responsiveness to the point where, today, we
are using 25% less of our system's processing capacity than we used
to, while transacting 35% more business.

My suggested mechanism does not prevent locks - it merely guarantees
that the user cannot overwrite something that has changed underneath
them without reviewing it, and that business is not halted while someone
has something up on their screen for editing.
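
A minimal sketch of that mechanism, using the same hypothetical ORDERS
table with an added REC_VERSION counter column (the table, column and
function names are illustrative, and the driver is assumed to report
cursor.rowcount for updates):

def read_for_edit(con, order_id):
    """Fetch the row for display; no lock is taken, so the user can
    take as long as they like."""
    cur = con.cursor()
    cur.execute(
        "SELECT STATUS, REC_VERSION FROM ORDERS WHERE ORDER_ID = ?",
        (order_id,),
    )
    status, version_seen = cur.fetchone()
    con.commit()               # end the read transaction straight away
    return status, version_seen

def save_edit(con, order_id, new_status, version_seen):
    """Write back only if nothing has changed underneath the user."""
    cur = con.cursor()
    cur.execute(
        "UPDATE ORDERS SET STATUS = ?, REC_VERSION = REC_VERSION + 1 "
        "WHERE ORDER_ID = ? AND REC_VERSION = ?",
        (new_status, order_id, version_seen),
    )
    saved = cur.rowcount == 1  # 0 rows: someone else changed the record first
    if saved:
        con.commit()
    else:
        con.rollback()         # re-read and show the user the new values for review
    return saved

Because the write transaction lives only for the duration of
save_edit(), no lock is ever held while the record sits on a screen.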

Locks, on the other hand, will stop business from being transacted
(and cash from flowing into the company). Once a certain threshold of
transactions is held up by locks (just a few hundred locks on key
tables), the entire system cascades into retry logic. The performance
curve is flat until it's vertical.

I understand that there are design "opportunities", but my bosses
disagree, and the company's past attempts to correct the design of the
core tables have actually borne out that the current table design is
the best compromise for allowing business to be transacted in a timely
manner.


On Thu, 2005-07-07 at 23:59 +1000, Helen Borrie wrote:
> At 01:06 PM 7/07/2005 +0000, you wrote:
> >Yes,
> >
> >Of course you are right.
> >It is two writers, I realized that after I posted my message.
> >Right now I am working the steps that David proposed.
>
> Sven, for your mental health and the integrity of your database, I strongly
> urge you to ignore David's advice. Fix up your app so that it anticipates
> and HANDLES lock conflicts - which are the database engine's totally
> *intended* response when there is contention for an update.
>
> Sorry, David, but your posting is 100% "up the spout". You clearly need to
> get your head around how the multiversioning engine works, why lock
> conflicts occur (and why you should WELCOME them!) and -- key to the whole
> -- what constitutes an atomic database application task...
>
> ./heLen
>