Subject: Re: enumeration table question
Author: Stan
--- In firebird-support@yahoogroups.com, "Adam" <s3057043@...> wrote:
>
> --- In firebird-support@yahoogroups.com, "Stan" <shizius@> wrote:
> >
> > Hi All,
> >
> > I have a table:
> >
> > CREATE GENERATOR enum1_generator;
> > CREATE TABLE enum1
> > (
> > enum1_id BIGINT NOT NULL PRIMARY KEY,
> > enum1_name VARCHAR(128) NOT NULL UNIQUE,
> > enum1_description VARCHAR(128) DEFAULT ''
> > );
> >
> > I am going to have 200+ clients simultaneously insert records into
> > this table. A lot of the inserts will be "duplicates": the
> > unique constraint will refuse the insert.
> >
> > To do this I have created a procedure:
> >
> > CREATE OR ALTER PROCEDURE get_enum1_id( enum1 VARCHAR(128) )
> > RETURNS( group_row BIGINT )
> > AS
> > BEGIN
> >   SELECT enum1_id FROM enum1 WHERE enum1_name = :enum1 INTO :group_row;
> >   IF (:group_row IS NULL) THEN
> >   BEGIN
> >     group_row = GEN_ID(enum1_generator, 1);
> >     INSERT INTO enum1 VALUES (:group_row, :enum1, '');
> >     WHEN SQLCODE -803 DO
> >     BEGIN
> >       SELECT enum1_id FROM enum1 WHERE enum1_name = :enum1 INTO :group_row;
> >     END
> >   END
> >   SUSPEND;
> > END!!
> >
> >
> > It goes without saying that I get a lot of deadlocks. I define my
> > insert transactions with:
> > IBASE_WRITE|IBASE_COMMITTED|IBASE_REC_NO_VERSION|IBASE_NOWAIT
> >
> > and when I get a deadlock, I simply back off and try again.
>
> Why try again? If you get a deadlock, it means the record is being
> inserted by someone else. For sure, flag it to 'confirm' later, but
> rather than simply delaying, which will slow the process down, put it
> in a holding queue.

I try again because I need the id so I can associate the record
with other inserted data.


>
> >
> > Is this a good way to accomplish inserts with my usage scenario?
> > Is there a better (faster, more stable) way that will avoid the
> > deadlocks? Maybe using explicit locks?
>
> One person adding to the table at a time? Sure, your deadlocks are
> gone, but it will definitely not be faster. I doubt your second select
> will do much. It is extremely unlikely that another transaction will
> commit in the 2 ms between the selects.

It happens: I used to get "unique constraint" failures during the
insert. I have up to 200-300 clients, all potentially uploading
similar information.

>
> If possible, try and organise the clients so that they are unlikely to
> attempt to insert the same records at the same time.


The clients don't know about each other; they need to "try" to insert
their data, and if it already exists, they need to get the existing
record ids to associate their data with.


>
> Another way is to insert all the records into a holding table without
> a unique constraint to worry about, then use a single transaction to
> pump it from one table to the other using the INSERT INTO tableA
> SELECT DISTINCT * FROM tableB syntax.
>

This is a great idea. My problem is that my app is "realtime":
clients observe the database as it is being updated. Another issue
is that while I am asking about one table, the same situation happens
in 30-40 tables. For this to work I would need the same number of
"staging" tables.
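If I went this route, the pump step could look something like this
(a sketch; enum1_staging is a hypothetical staging table holding just
the names, and the derived table needs Firebird 2.0 or later):

```sql
INSERT INTO enum1 (enum1_id, enum1_name, enum1_description)
SELECT GEN_ID(enum1_generator, 1), s.enum1_name, ''
FROM (SELECT DISTINCT enum1_name FROM enum1_staging) s
WHERE NOT EXISTS (SELECT 1 FROM enum1 e
                  WHERE e.enum1_name = s.enum1_name);
```

Since only the single pump transaction ever touches the unique
constraint, the clients' inserts into the staging table would never
conflict with each other.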

I actually tried something like this: I defined my tables without
constraints, and once in a while I ran a batch job to remove duplicate
records. The only issue is that for the "enumeration" tables all my
select queries had to use "distinct". I might try this again.
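For a table like enum1 the batch dedup could be a single statement
per table, e.g. keeping the lowest id per name (a sketch; any child
rows referencing the deleted duplicate ids would have to be repointed
first):

```sql
DELETE FROM enum1
WHERE enum1_id NOT IN (SELECT MIN(enum1_id)
                       FROM enum1
                       GROUP BY enum1_name);
```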


> Adam
>


thanks for the input,

-stan