Subject | Re: enumeration table question
---|---
Author | Adam
Post date | 2006-06-22T01:59:02Z
--- In firebird-support@yahoogroups.com, "Stan" <shizius@...> wrote:
Stan,
> Great information, thank you.
>
> I have hundreds of inserts just like this one to make
> from each of my clients. I do all the inserts from one
> client in 1 transaction.
>
> The reason that I use "WRITE|COMMITTED|REC_NO_VERSION|NOWAIT"
> transaction is that if there is a conflict, let's say on the 10th (out
> of 200) insert, I want to revert the transaction at this point and try
> again, instead of wasting time by processing the rest of the 190
> inserts and then getting a "conflict" when I commit.
>
> It does not make sense to create separate transactions for
> each insert, I tried this, and the performance is terrible.
>
> am I missing something?
You should not be getting any conflict when you commit. You will get
an exception on the 10th insert. With Ann's suggested settings
(WAIT), you will then freeze until the transaction that is causing
the conflict commits or rolls back. If it commits, you will receive
the lock conflict; if it rolls back, you will succeed. Because this
logic waits until the other transaction finishes, the subsequent
attempt should always succeed, whereas with NOWAIT-and-retry logic
the first transaction may still be running when you retry. In the
worst case you are fruitlessly giving the database select/insert
cycles, and it is so busy doing this that it can't get around to
actually finishing the first transaction causing the problem.
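For reference, the two option sets look roughly like this in DSQL
terms (a sketch only; your client library most likely sets the
equivalent TPB flags rather than issuing these statements):

-- what you are using now: read committed, no record version, no wait
SET TRANSACTION READ WRITE NO WAIT
  ISOLATION LEVEL READ COMMITTED NO RECORD_VERSION;

-- what Ann suggested: concurrency (snapshot) with wait
SET TRANSACTION READ WRITE WAIT
  ISOLATION LEVEL SNAPSHOT;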
Something I am not understanding: do the 200 inserts have to be an
atomic operation due to business rules? If you can run each insert
independently in its own transaction, you should get better
throughput. What I mean by this is that if insert 10 fails, can't
you put 10 in a queue to look at later and try 11? Once you get to
200, 10 should be free to process. Alternatively, you could spawn a
new thread and, in a separate connection, use a WAIT transaction in
that thread.
The problem you seem to be encountering is that you have large
overlaps in your sets of data, and because your transactions run for
so long, there is an extremely high probability of a conflict
occurring. Worse still, if the problem occurs at insert 190 and you
then have to roll back, you have effectively stopped 200 clients
from inserting 190 specific values for the duration of that
transaction.
If you are able to do one operation per transaction, then even
though the overhead of starting and committing transactions is
higher, each transaction is much shorter and the proportion of
records that are effectively locked against other transactions is
much lower.
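As a sketch of what I mean, each lookup/insert then becomes its own
short transaction around Ann's procedure (the enum name 'widget' is
just a made-up example value):

SET TRANSACTION READ WRITE WAIT ISOLATION LEVEL SNAPSHOT;
SELECT group_row FROM get_enum1_id('widget');
COMMIT;
-- If the call fails with a lock conflict or duplicate key error,
-- roll back, start a new transaction and repeat the same call; the
-- SELECT inside the procedure will then find the row that the other
-- transaction committed.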
Adam
> thank you,
>
> stan
>
> --- In firebird-support@yahoogroups.com, "Ann W. Harrison"
> <aharrison@> wrote:
> >
> > Stan wrote:
> >
> > > CREATE TABLE enum1
> > > ( enum1_id BIGINT NOT NULL PRIMARY KEY,
> > > enum1_name VARCHAR(128) NOT NULL UNIQUE,
> > > enum1_description VARCHAR(128) DEFAULT '' );
> > >
> > > I am going to have 200+ clients simultaneously insert records into
> > > this table. A lot of the inserts will be "duplicates": the
> > > unique constraint will refuse the insert.
> >
> > What you're currently doing is inviting live-lock - operations
> > that run, fail, undo themselves, run, fail, undo, and so on,
> > wasting innumerable cycles for no benefit.
> >
> > OK. Here's what I would do. 1) use a concurrency transaction.
> > 2) use WAIT mode, not NO_WAIT. 3) don't use REC_NO_VERSION
> >
> > Define the procedure like this:
> >
> > CREATE OR ALTER PROCEDURE get_enum1_id( enum1 VARCHAR(128) )
> > RETURNS( group_row BIGINT )
> > AS BEGIN
> > SELECT enum1_id FROM enum1 WHERE enum1_name = :enum1 INTO :group_row;
> > IF (:group_row IS NULL) THEN
> > BEGIN
> > group_row = GEN_ID (enum1_generator, 1);
> > INSERT INTO enum1 VALUES(:group_row, :enum1,'');
> > END
> > SUSPEND;
> > END!!
> >
> > Start a transaction, invoke the procedure, and commit.
> >
> > If the procedure gets an error, start a new transaction,
> > and run it again. The second time, the select will succeed.
> >
> > This avoids serializing inserts and reports an error only when
> > there's a conflicting insert in progress.
> >
> > Regards,
> >
> >
> > Ann
> >
>