Subject | Re: [IBO] referential integrity, locking... |
---|---|
Author | Helen Borrie |
Post date | 2004-01-10T04:03:25Z |
At 01:18 AM 10/01/2004 +0100, you wrote:
>What does dataset's LockSQL method? is it add WITH LOCK for a select
>statement?

No, it has nothing to do with WITH LOCK, except that you are looking at two
different, incompatible ways of implementing pessimistic locking.

>is it assure, that if some records are used by transaction, then another
>transaction/session can't delete them from the DB?

Broadly, that is what a pessimistic lock does. But it is also what
transaction isolation achieves, *without* pessimistic locking.

>has it something with the pessimistic locking?

Yes, but nothing to do with the Firebird 1.5 "FOR UPDATE WITH LOCK" syntax.

>I need to guarantee, that if
>software uses some data from a DB, then other software instance couldn't
>break referential integrity - how to do this in Firebird server-client
>architecture?

The "right way" to do this in Firebird is to choose the appropriate
combination of transaction isolation and WAIT strategy. In a normal
strategy (optimistic locking) a lock does not kick in until you POST an
actual UPDATE or DELETE statement.
A pessimistic lock can be made to force the transaction's settings to kick
in before the transaction actually submits the request for the change or
deletion.
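As a sketch of that "right way" (table and column names here are hypothetical, and in IBO you would normally set the equivalent transaction component properties rather than issue the statement yourself):

```sql
/* Start a read-write transaction that WAITs on conflicts instead of
   failing immediately (standard Firebird SET TRANSACTION syntax): */
SET TRANSACTION READ WRITE WAIT ISOLATION LEVEL SNAPSHOT;

/* Under optimistic locking, contention for the row only arises when an
   actual change is posted: */
UPDATE ACCOUNT SET BALANCE = BALANCE - 100 WHERE ACCT_ID = 1;
/* If another transaction has a pending change to the same row, this
   statement waits until that transaction commits or rolls back. */
```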
The LockSQL property in IBO is one part of a client-initiated pessimistic
locking behaviour that works at dataset level rather than transaction
level. It is a hack.
You would need PessimisticLocking set to True on the dataset. By default, IBO
will then send a "dummy update" to the server whenever the application
calls Edit or Delete, by "updating" the primary key column(s) of the
current row to its (their) current value. This will either succeed (the
current transaction has "locked" the row and no other transaction may write
another version of it) or fail (another transaction has already locked the
row).
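Concretely, the default dummy update amounts to something like this (table and key names are hypothetical; the exact statement IBO generates may differ):

```sql
/* A "dummy update": set the primary key to the value it already holds,
   changing nothing, purely to claim the row for this transaction. */
UPDATE CUSTOMER
   SET CUST_ID = CUST_ID
 WHERE CUST_ID = :CUST_ID;
/* Success: this transaction now owns the row version, and no other
   transaction can write a new version of it until this one ends.
   Failure (lock conflict): another transaction locked the row first. */
```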
The LockSQL property allows you to provide your own SQL statement for the lock if
there is some reason why the default one causes a problem. This can be
part of a "workaround" strategy to coordinate with conditional logic in a
trigger, to prevent unwanted trigger actions from occurring as a result of
the locking statement.
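A sketch of that workaround, with all object names hypothetical: supply a LockSQL statement the triggers can recognise as harmless, and give the trigger conditional logic that skips its action when nothing of interest changed:

```sql
/* Hypothetical statement to assign to the dataset's LockSQL property --
   a dummy update that the trigger below will ignore: */
UPDATE CUSTOMER SET CUST_ID = CUST_ID WHERE CUST_ID = :CUST_ID;

/* Firebird PSQL trigger with conditional logic, so the dummy update
   does not produce an unwanted audit row: */
CREATE TRIGGER BU_CUSTOMER FOR CUSTOMER
BEFORE UPDATE AS
BEGIN
  /* Act only when a column of interest has actually changed */
  IF (NEW.CREDIT_LIMIT <> OLD.CREDIT_LIMIT) THEN
    INSERT INTO AUDIT_LOG (CUST_ID, CHANGED_AT)
    VALUES (NEW.CUST_ID, CURRENT_TIMESTAMP);
END
```

Note that a comparison like `NEW.CREDIT_LIMIT <> OLD.CREDIT_LIMIT` ignores NULL transitions; a real trigger would need extra NULL checks if those matter.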
You don't have to do anything yourself to protect referential
integrity. The database takes care of locking rows in other tables that
are linked to your row by RI dependencies.
Pessimistic locking of any sort is *not* for everyday use. It should be a
last resort, for a task where you have a specific requirement for strict
serialization that cannot be met by the transaction architecture. In the
real world, that kind of requirement is pretty rare. In tasks where you
decide you have to use it, you had better make certain that your
application is not going to hold pessimistic locks of long
duration. You'll have a lot of extra things to take care of to ensure
quick turnover of transactions via hard Commits...
Helen