Subject: Re: [ib-support] Deadlocks
Author: Helen Borrie
Post date: 2001-01-06T06:20:47Z
At 10:12 AM 05-01-01 -0700, you wrote:
>I have code that returns back unique ids (I'm not using Interbase generators
>because I want it to work with other databases). The code basically does
>these commands:
>
>update id_table set next_id = next_id + 1 where table_name = ?
>select next_id from id_table where table_name = ?
>commit
>
>I'm using the native Interbase API and I get errors from isc_dsql_execute.
>When I run two copies of the program I get these errors (as seen from using
>isc_print_status):
>
> deadlock
> -update conflicts with concurrent update
>
>I've tried different transaction options in isc_start_transaction but I can't
>seem to find a good combination. And I don't understand where the deadlock
>is coming from in the above case.
>
>One thing I would like is a document explaining how Interbase does locking.
>When using Sybase SQL Anywhere, it came with a section talking about the
>different lock types (shared, exclusive, ...) and told you which were applied
>when (on select, update, ...) so I could know just what kinds of locking
>problems I faced. In the case of Sybase the above works since the one
>program will get an exclusive lock on the table_name row in the id_table when
>the update is done. If another program tries to do the same thing, it will
>wait for the first lock to be released and then grab the lock for itself.
>
>Any hints on getting this working with Interbase?

Doing it this way you are losing the benefit of the fact that generators
are outside transaction control. Once a number has been generated, the
transaction that took it has it, no further ado. That number won't be
generated again.
When you have multiple users potentially all trying to update the same row
in what's (effectively) a generator table, you are stuck with transaction
control to prevent overwriting. You're certain to get as many lock
conflicts as you have users concurrently inserting into any single table,
because one transaction already has an uncommitted update pending when
another wants to do the same thing to it. Whatever you do is going to be
"best of a bad job" and this isn't a nice way to generate unique
ids. (This isn't just in IB, it's in any database. In dbs without
transaction control and pessimistic locking, you are going to get
overwrites and key violations instead.)
You want to do
update id_table set next_id = next_id + 1 where table_name = ?
select next_id from id_table where table_name = ?
commit
Now, this won't work unless it's in a stored procedure, I guess you realise
that. You can pass only one dynamic SQL statement at a time. And here is
your trap with DSQL:
Statement 1:
update id_table set next_id = next_id + 1 where table_name = ?
commit
Statement 2:
select next_id from id_table where table_name = ?
commit
How are you going to know whether Statement 2 will return the value as
updated by Statement 1?
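To make the trap concrete, here is a minimal Python sketch in which a plain dictionary stands in for id_table and each function call stands in for one committed DSQL statement. Nothing stops the two clients' statements from interleaving like this:

```python
# Stand-in for id_table: a dict mapping table_name -> next_id.
id_table = {"fish_species": 7}

def stmt1_update(table_name):
    # Statement 1: update id_table set next_id = next_id + 1 ... ; commit
    id_table[table_name] += 1

def stmt2_select(table_name):
    # Statement 2: select next_id from id_table ... ; commit
    return id_table[table_name]

stmt1_update("fish_species")             # client A updates and commits (7 -> 8)
stmt1_update("fish_species")             # client B updates and commits (8 -> 9)
id_for_a = stmt2_select("fish_species")  # client A reads 9 ...
id_for_b = stmt2_select("fish_species")  # ... and so does client B
print(id_for_a, id_for_b)                # 9 9 -- both got the same "unique" id
```

This interleaving is exactly what the row lock taken inside the stored procedure prevents.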
Assuming your id_table has a structure something like this:
id_table
    table_name  varchar(31)
    next_id     numeric(18,0)
create procedure get_table_id (the_table varchar(31))
returns (the_id numeric(18,0))
as
begin
  update id_table set table_name = table_name
    where table_name = :the_table; /* dummy update: try to get an exclusive lock */
  select next_id from id_table
    where table_name = :the_table
    into :the_id;
  the_id = the_id + 1;
  update id_table set next_id = :the_id
    where table_name = :the_table;
  SUSPEND;
end
All your application has to do is
select the_id from get_table_id('fish_species')
although if you want this to be portable to a PL that doesn't support this
syntax, you'll be best using a more standard means to get back the return
value.
To reduce the length of time this row is blocked for other users, make the
call to this procedure the only one in its own transaction context and use
the fastest possible way to have it commit, e.g. in Delphi set Autocommit
true.
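As a sketch of that transaction scoping, here sqlite3 is only a stand-in for InterBase, and "begin immediate" plays the part of the procedure's dummy-update lock; the point is that the id fetch lives alone in a transaction that is committed (or rolled back) as fast as possible:

```python
import sqlite3

# sqlite3 stands in for InterBase purely to illustrate the transaction
# scoping; this is not the InterBase API.
con = sqlite3.connect(":memory:", isolation_level=None)  # explicit control
con.execute("create table id_table (table_name text primary key, next_id integer)")
con.execute("insert into id_table values ('fish_species', 0)")

def get_table_id(table_name):
    con.execute("begin immediate")   # grab the write lock up front
    try:
        con.execute("update id_table set next_id = next_id + 1"
                    " where table_name = ?", (table_name,))
        (the_id,) = con.execute("select next_id from id_table"
                                " where table_name = ?", (table_name,)).fetchone()
        con.execute("commit")        # release the row as quickly as possible
        return the_id
    except Exception:
        con.execute("rollback")      # on ANY error: roll back first, recover after
        raise

print(get_table_id("fish_species"))  # 1
print(get_table_id("fish_species"))  # 2
```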
Use SNAPSHOT isolation, use WAIT for conflict resolution and have your
client handle ANY exception by doing ROLLBACK, i.e. don't provide any
fallback for fixing errors until *after* the transaction is rolled back.
Caution: DON'T use SNAPSHOT TABLE STABILITY because that will lock out all
users.
If there is no exception (i.e. a table_name value is found to match the
input argument) the procedure will wait until it can get an exclusive lock
by being allowed to do the dummy update on the table_name column. Once it
has this lock, there is no possibility that another transaction will read
the same initial value for next_id. It will hold the lock (thus excluding
other transactions from updating this row) until your app commits the
transaction that called the procedure.
With WAIT true, you won't get a lock conflict error if the procedure
fails to get the lock immediately - it will just wait indefinitely until
the lock becomes available.
If you have a lot of users inserting, you might want to put a timer in
there somewhere, set a timeout limit and have the transaction roll back if
it's not committed in some user-tolerable period. Deliver some message
like "Request timed out after n seconds" which at least will reduce the
queue and alert someone that something is holding things up. It'll also
help QA when the design review comes up and it will be amusing to otherwise
frustrated users to run sweepstakes on how many timeouts each workstation
will get today.
There are other strategies, e.g. use NOWAIT for conflict resolution and put
a retry handler in your client app. I guess it gets down to local demand
and what the users will tolerate.
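Sketched in Python, such a retry handler might look like the following; LockConflictError is a hypothetical stand-in for however your driver surfaces the deadlock / update-conflict status vector.

```python
import random
import time

# With NOWAIT the server reports the lock conflict immediately instead of
# blocking, so the client rolls back and retries after a short randomised
# backoff.
class LockConflictError(Exception):
    pass

def get_id_with_retry(attempt_fetch, retries=5, base_delay_s=0.05):
    for attempt in range(retries):
        try:
            return attempt_fetch()   # one attempt = one short transaction
        except LockConflictError:
            # attempt_fetch rolled back already; back off, then try again
            time.sleep(base_delay_s * (2 ** attempt) * random.random())
    raise LockConflictError("gave up after %d attempts" % retries)
```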
Where this could get really horrible is if you have multiple users doing
batch inserts or updates simultaneously. Then you'll *really* start to
wonder how important it is to have a vendor-independent id generator.
H.
All for Open and Open for All
InterBase Developer Initiative · http://www.interbase2000.org
_______________________________________________________