Subject Re: [IBO] transaction behavior
Author Helen Borrie
James,

At 09:43 AM 29/05/2004 +0800, you wrote:
>Hi Lester
>Lester Caine wrote:
>
> > So the 'search query' is stand-alone, with its own transaction, and
> > called before/after each new record is created?
> >
> > I don't see why you need to worry about the 'search query' seeing
> > anything of the 'new transaction' until it is ready to post, and then
> > the next 'cycle' the 'search query' will include the latest info.
> > The 'new transaction' should take care of checking any duplications it
> > is creating internally before the transaction is committed?
> >
> > OR are you in a situation where several users could be adding the same
> > new stuff, so the checks need to be made when a transaction is completed?
> >
> > The new question is "How are duplicates created, and could THAT be
> > handled better with a generator?" Once again a little knowledge is a
> > dangerous thing :) Without the whole picture suggestions for improvement
> > are difficult ;)
>
>Thank you for your interest in this matter. The program is used to input
>all the invoices from various suppliers. It is a common mistake for the
>user to enter the same invoice twice or more, which results in duplicate
>records; I don't know why. But to prevent the same invoice from being
>posted, I want the program to check for possible duplicate records via
>supplierid and invoice no. And since this is a master-detail setup in
>which IBO performs a post-retaining on the master dataset, I came up
>with another IB_Query with its own ib_transaction, so that the check
>would not see the record currently being inserted because of the
>post-retaining.
>
>
>I am sorry for my poor English; I hope you get the picture and can help
>me improve my poor solution.

I think you are making some wrong assumptions, perhaps through not
understanding transaction isolation. You really must get around to
visualizing a transaction as a wrapper for a single task that either fails
completely or succeeds completely.

By trying to run a single task in two separate transactions, you risk
causing exactly the problems that transaction isolation is designed to
prevent. If you need to check for an existing duplicate, then design the
task so that it is done in the same context.

The proper way to do this would be with a BEFORE INSERT trigger on the
Invoice table, using a custom exception to block the operation and alert
the user, something like:

create exception duplicate_invoice 'This invoice has already been entered';
commit;

set term ^ ;

create trigger bi_check_for_duplicates for invoice
active before insert position 0 as
begin
  if (exists (
    select 1 from invoice
    where cust_id = new.cust_id
    and invoice_date = new.invoice_date
    and cust_ref = new.cust_ref)) then
    exception duplicate_invoice;
end ^

set term ; ^

Put the Insert action in the client inside a TRY...EXCEPT block, have the
app look for that exception message, and you have your totally atomic
solution. The exception stops the insert statement from being posted and
the ball is back in the application's court to resolve - perhaps by
cancelling the insert (which is now only in the client buffer, and not
posted to the database) and rolling back the transaction, or perhaps by
getting the user to fix the error and resubmit the request.
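
For illustration, a minimal Delphi sketch of that TRY...EXCEPT handling.
The query name (qryInvoice) is hypothetical, and matching on the
exception's message text is just the simplest portable check - verify how
your IBO version surfaces the server exception:

  { uses SysUtils, Dialogs, plus your IBO units }
  try
    qryInvoice.Post;
  except
    on E: Exception do
      if Pos('already been entered', E.Message) > 0 then
      begin
        { duplicate blocked by the trigger: the row exists only in the
          client buffer, so discard it }
        qryInvoice.Cancel;
        ShowMessage('That invoice has already been entered.');
      end
      else
        raise;  { any other error: let it propagate }
  end;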

However, remember that NO transaction can see work that has not been
committed by another transaction. Even this transaction could fail if,
between its start and commit, another transaction with a lower system
transaction id committed the same combination of unique data items. This
can happen even with ReadCommitted, depending on the NoWait and RecVersion
settings on the transaction. In almost all conditions, the transaction
with the lowest transaction id will win.
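
If you want to experiment with those settings, they are exposed as
properties on IBO's transaction component. A hedged sketch - the property
names below (Isolation, RecVersion, LockWait) are from memory and should
be checked against your IBO version:

  trnInvoice.Isolation := tiCommitted;  { ReadCommitted }
  trnInvoice.RecVersion := True;        { read the latest committed version }
  trnInvoice.LockWait := False;         { NoWait: report conflicts at once }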

This system transaction id has nothing to do with any ID that the client,
i.e. IBO, gives to it. It is known in the database; in Fb 1.5 you can read
the ID of your own current transaction (CURRENT_TRANSACTION). You can
never find out whether you have the newest transaction id affecting this
record, other than by attempting to post.
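
For example, you could read it with a one-row query (assuming an IB_Query
named qryTxn; the column alias is just for illustration):

  qryTxn.SQL.Text :=
    'select current_transaction as txn_id from rdb$database';
  qryTxn.Open;
  ShowMessage('My transaction id: ' +
    qryTxn.FieldByName('txn_id').AsString);
  qryTxn.Close;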

It's a serious flaw in design to try to work around concurrency protection
in the way you are trying to do it. It doesn't work, and you are risking
unpredictable logical corruption. (Logical corruption means bad data that
isn't detected by the engine).

Helen