Subject | Re[2]: [ib-support] Deadlock problem |
---|---|
Author | Helen Borrie |
Post date | 2001-11-14T12:36:20Z |
At 01:52 PM 14-11-01 +0200, you wrote:
>Hello Ann,
>
>Thank you for the response.
>
>Tuesday, November 13, 2001, 8:58:24 PM, you wrote:
>
>AWH> However, the rest of the world considers consistency to be a
>AWH> bore. For them, there is a transaction mode called read committed
>AWH> which will behave like DB2. (SET TRANSACTION READ COMMITTED)
>
>We tested "SET TRANSACTION READ COMMITTED" mode, and it gave us the same
>result. Are there any other suggestions?
Whether you use consistency (SNAPSHOT) or READ COMMITTED, you have the job of designing your client so that it handles lock conflicts the way you want them handled. As Ann suggested, you could make use of WAIT... the benefit of this is that your transaction will wait until the other transaction has either committed or rolled back its changes, and will then step in and commit if it can. It has its disadvantages, too. If your application isn't taking care of long-running transactions, an inconsiderate user can lock up other transactions by disappearing for a coffee break. If you have a lot of users all potentially hitting the same rows in your tables, it's not hard to see how this approach can be a minefield.
Some of us prefer to reserve SNAPSHOT for tasks like reporting, data exports, period-end processing and so on, where a consistent view of the data is essential; but use READ COMMITTED with NO WAIT for busy interactive environments and resolve lock conflicts by rolling back and retrying. You have to cover this ground anyway, for handling database errors.
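The rollback-and-retry pattern described above can be sketched in client code roughly like this. This is a minimal illustration of the pattern only, not any particular driver's API: `LockConflictError`, `do_work` and `rollback` are hypothetical stand-ins for whatever exception and transaction calls your data access layer actually provides.

```python
import time

class LockConflictError(Exception):
    """Stand-in for the driver's lock-conflict/deadlock error (hypothetical)."""

def run_with_retry(do_work, rollback, max_attempts=3, delay=0.01):
    """Run a short READ COMMITTED / NO WAIT unit of work, rolling back
    and retrying when another transaction holds the row lock."""
    for attempt in range(1, max_attempts + 1):
        try:
            return do_work()
        except LockConflictError:
            rollback()                   # release our changes so the other side can finish
            if attempt == max_attempts:
                raise                    # give up; tell the user to try again later
            time.sleep(delay * attempt)  # brief backoff before retrying

# Demonstration with a stand-in "database" that conflicts twice, then succeeds.
attempts = {"n": 0}
def do_work():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise LockConflictError("lock conflict on no wait transaction")
    return "committed"

result = run_with_retry(do_work, rollback=lambda: None)
```

The same loop also doubles as the general database-error handling you need to write anyway; only the exception type being caught changes.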
Some data interface tools don't give you much help with preserving rolled-back data or refreshing the READ COMMITTED view, so this approach may involve a lot of client code in some programming environments. At the super-duper extreme of component-based data connectivity there is IB Objects, which does all of this by magic. :)) *and* lets you set up to "pause" uncommitted transactions after a specified period has elapsed, thereby breaking deadlocks transparently.
One of the really nice things about FB/IB is that you have many combinations available in transaction management. If you understand that locking is done at row level, and optimistically, you will realise that a lock doesn't occur until a transaction actually has an update or delete pending. Except with special settings, transactions don't lock unmodified datasets: they only isolate their view of them.
One trick you can do is try to get a pessimistic lock on a row that your transaction is going to update. To do that, you perform a "dummy update" on the row before you do the real DML - typically,
UPDATE ATABLE SET ITS_PK = ITS_PK
WHERE ITS_PK = :ITS_PK;
If the transaction is able to do this, then it has the row locked and no other transaction can get that lock. It can then proceed to accept parameters for the real update statement, knowing that it won't hit a deadlock. Conversely, if it can't do this, then some other transaction already holds that lock. You can intercept that lock conflict message before your user starts editing, roll back, and tell the user to attempt the edit later.
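The attempt-the-lock-first flow might look like this on the client side. Again this is a sketch, not a real driver API: `execute` and `LockConflictError` are hypothetical, standing in for your connectivity layer's statement execution and its NO WAIT conflict error.

```python
class LockConflictError(Exception):
    """Stand-in for the driver's lock-conflict error (hypothetical)."""

def try_pessimistic_lock(execute, pk):
    """Run the dummy update; True means this transaction now holds the row lock
    and the user can safely begin editing."""
    try:
        execute("UPDATE ATABLE SET ITS_PK = ITS_PK WHERE ITS_PK = ?", (pk,))
        return True
    except LockConflictError:
        return False   # someone else holds the lock: roll back, tell the user

# Stand-in driver where row 1 is already locked by another transaction.
locked_rows = {1}
def execute(sql, params):
    if params[0] in locked_rows:
        raise LockConflictError("lock conflict on no wait transaction")

assert try_pessimistic_lock(execute, 1) is False  # locked elsewhere: decline the edit
assert try_pessimistic_lock(execute, 2) is True   # lock acquired: safe to edit
```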
With this trick you just have to be careful to condition any Before Update triggers so that they won't fire when you do the dummy update. (But pessimistic locking can be a real bind if you have inconsiderate users and no timeout protection.)
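Conditioning a trigger for the dummy update can look something like this sketch in InterBase/Firebird trigger language. The table, column and trigger names (ATABLE, SOME_COL, LAST_CHANGED, BU_ATABLE) are hypothetical; adapt the change test to cover whichever columns your trigger actually reacts to.

```sql
/* Hypothetical Before Update trigger, conditioned to ignore the dummy
   update. The dummy update sets ITS_PK = ITS_PK and changes nothing
   else, so we exit early when no non-key column has actually changed.
   Note: a plain equality test treats NULL-to-NULL as "changed"; add
   explicit IS NULL checks if SOME_COL is nullable. */
CREATE TRIGGER BU_ATABLE FOR ATABLE
BEFORE UPDATE AS
BEGIN
  IF (NEW.SOME_COL = OLD.SOME_COL) THEN
    EXIT;                                 /* dummy update: do nothing */
  NEW.LAST_CHANGED = CURRENT_TIMESTAMP;   /* the trigger's real work */
END
```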
cheers,
Helen
All for Open and Open for All
InterBase Developer Initiative · http://www.interbase2000.org
_______________________________________________________