Subject Re: [firebird-support] SELECT WITH LOCK, WAIT transaction, deadlock
Author Thomas Steinmaurer
Hi,

> 2016. 04. 28. 21:22 keltezéssel, Ann Harrison aharrison@...
> [firebird-support] írta:
>> Okay... What's the requirement that keeps you from using generators?
>
> I need number generators that return sequence numbers without gaps or
> repeats, but I don't know in advance how many generators will be
> needed. The number of generators depends on the customers (X Ltd.,
> Y Ltd., ...). This is why I want to store the generators in a table.
>
>> Do you intend to grab the number in a transaction that does other work?
>
> The transaction does other work both before and after grabbing the
> number. The whole thing happens in one stored procedure. (Works like a
> charm, except for the deadlocks.)
>
>> What is the concurrency requirement?
>
> If I understand your question correctly... I don't expect a high load,
> but I am testing it because I don't want to see deadlocks in production.
>
>> What you're thinking about doing
>> will serialize the generation of identifiers and all other actions of
>> those transactions. I'm not totally convinced that using
>> ReadCommitted/wait will create an auditable series of numbers in the
>> case where a transaction fails.
>
> For testing purposes, the test app inserts the generated numbers into a
> one-field table with a unique constraint. I don't see any errors except
> the deadlocks.

I don't see deadlocks with the following test case.

1) Create some tables and a stored procedure doing the access + increment

CREATE TABLE GENERATOR_TABLE
(
  ID BIGINT NOT NULL
);

CREATE TABLE GENERATOR_TABLE_LOG
(
  ID BIGINT NOT NULL,
  ID_NEW BIGINT NOT NULL,
  ID_DIFF BIGINT NOT NULL
);

ALTER TABLE GENERATOR_TABLE ADD CONSTRAINT PK_GENERATOR_TABLE PRIMARY KEY
(ID);

ALTER TABLE GENERATOR_TABLE_LOG ADD CONSTRAINT PK_GENERATOR_TABLE_LOG PRIMARY KEY
(ID);

SET TERM ^^ ;
CREATE PROCEDURE P_INCREMENT RETURNS (
  ID BIGINT) AS
BEGIN
  SUSPEND;
END ^^
SET TERM ; ^^

SET TERM ^^ ;
ALTER PROCEDURE P_INCREMENT RETURNS (
  ID BIGINT) AS
DECLARE i BIGINT;
DECLARE id_new BIGINT;
BEGIN
  i = 1;
  WHILE (i <= 10000) DO
  BEGIN
    /* Run each increment in its own autonomous transaction so the
       row lock is released as soon as that transaction commits. */
    IN AUTONOMOUS TRANSACTION DO
    BEGIN
      SELECT id FROM generator_table WITH LOCK INTO :id;
      id_new = id + 1;
      INSERT INTO generator_table_log (id, id_new, id_diff)
        VALUES (:id, :id_new, :id_new - :id);
      UPDATE generator_table SET id = :id_new WHERE id = :id;
      i = i + 1;
    END
  END
  SUSPEND;
END ^^
SET TERM ; ^^

COMMIT;

INSERT INTO GENERATOR_TABLE (ID) VALUES (0);
COMMIT;


The autonomous transaction is used to split the lock up into smaller chunks from the perspective of the other isql sessions; otherwise concurrent transactions would wait until the entire SP has finished in the first isql session.
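The serialization idea can also be sketched outside Firebird. Here is a minimal Python analogy (not Firebird code, and not from the original post): several workers funnel every read-and-increment through a single lock, mimicking the SP plus WITH LOCK, and the resulting sequence has no gaps and no duplicates:

```python
import threading

# Shared "generator table": one counter guarded by a lock, plus a log
# of (old, new) pairs, mimicking GENERATOR_TABLE and GENERATOR_TABLE_LOG.
counter = 0
log = []
lock = threading.Lock()

def increment_many(n):
    global counter
    for _ in range(n):
        # The lock plays the role of WITH LOCK: only one worker at a
        # time may read the counter and write back the incremented value.
        with lock:
            old = counter
            counter = old + 1
            log.append((old, counter))

# 5 workers x 10000 increments, like the 5 isql sessions in the test.
threads = [threading.Thread(target=increment_many, args=(10000,))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

new_values = [new for _, new in log]
print(len(new_values), min(new_values), max(new_values))
# -> 50000 1 50000, i.e. no gaps and no duplicates
```

The point of the analogy is only that a single serialized access point yields a gap-free series; the actual durability and conflict handling is, of course, done by Firebird.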


2) Fire up e.g. 5 isql sessions and execute the following concurrently:

SET TRANSACTION READ WRITE ISOLATION LEVEL READ COMMITTED RECORD_VERSION WAIT;
EXECUTE PROCEDURE P_INCREMENT;


3) Result

SQL> select count(*), min(id_new), max(id_new) from generator_table_log;

COUNT MIN MAX
===================== ===================== =====================
50000 1 50000


No gaps, no deadlock messages, no PK/unique violations in the log table, etc.

It is crucial that access to the central generator table is funneled through a single access point, the SP in our case. Is there any chance that something else is accessing the table in your scenario, resulting in the deadlock / update conflict?
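The failure mode that a second access path can introduce is the classic lost update, which can be shown deterministically in a few lines of Python (again an analogy, not Firebird code): two "sessions" both read the counter before either one writes back, so one increment is silently lost and the next generated number would be a duplicate:

```python
# Two "sessions" bypassing the single access point: both read the
# counter before either writes back -- the lost-update pattern that
# the serialized SP + WITH LOCK prevents.
counter = 0

read_a = counter   # session A reads 0
read_b = counter   # session B reads 0 as well

counter = read_a + 1   # session A commits: counter = 1
counter = read_b + 1   # session B commits: counter = 1 again

print(counter)  # -> 1, although two increments were attempted
```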



--
With regards,
Thomas Steinmaurer
http://www.upscene.com

Professional Tools and Services for Firebird
FB TraceManager, IB LogManager, Database Health Check, Tuning etc.