Subject: Re: [IB-Architect] Some IB questions.
Author: Charlie Caro
Post date: 2000-06-01T21:10:23Z
Ann Harrison wrote:

> >4.- What's the trick behind the idea that R/O result sets are sent with as
> >many records as possible inside a network packet, whereas FOR UPDATE result
> >sets are sent with one record per network packet? What have I gained using
> >FOR UPDATE if live recordsets on the server side are almost an illusion?
> >Usually, these live recordsets are produced on the client side thanks to
> >buffering.
>
> Charlie? Help!

The origins of this behavior relate to the introduction of READ_COMMITTED
transactions. A QA tester noticed a difference in behavior when she ran a test
against a local database and against a remote database.

The test started two transactions in a client program. The READ_COMMITTED
transaction opened a FOR UPDATE cursor. In another transaction, a searched
UPDATE statement modified one or more rows that were members of the result set
of the positioned update cursor in the READ_COMMITTED transaction.

When the test was run locally, the fetches from the positioned update cursor
always saw the update from the other transaction. Against a remote database,
however, the updates were sometimes not seen. The difference arises because
the local server (CLASSIC at the time) fetches rows one at a time, while the
IB remote protocol always prefetches rows over the remote pipeline.

This state of affairs bothered enough QA and R&D folks that the decision was
made to turn off remote pipelining for FOR UPDATE cursors. This made the
remote and local behavior identical.

I disagreed on three counts:

1) I'm heavily biased toward performance, and this change slows retrieval by
at least an order of magnitude;

2) The anomaly naturally occurs between distinct clients anyway; it's just a
matter of timing. You just don't notice it because everyone is
"pointing'n clicking" independently;

3) For various practical reasons (middleware, LCD, etc.), most developers
don't use multiple transactions in an application.

I wanted to document the anomaly so that developers could code around it.
Unless the architects populi disagree, I would like to roll back the change in
a future release to restore the performance.

Hope this helps,
Charlie
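The order-of-magnitude performance claim comes down to network round trips: a pipelined cursor packs many records into one packet, while a one-record-per-packet FOR UPDATE cursor pays a full round trip per row. A back-of-the-envelope sketch, with hypothetical numbers (the rows-per-packet figure is an assumption, not an InterBase constant):

```python
import math

def round_trips(num_rows, rows_per_packet):
    """Round trips needed to fetch num_rows when the protocol ships
    rows_per_packet records per network packet."""
    return math.ceil(num_rows / rows_per_packet)

# Hypothetical workload: 10,000 rows, ~50 rows fitting in one packet.
pipelined = round_trips(10_000, 50)       # R/O cursor: 200 round trips
row_at_a_time = round_trips(10_000, 1)    # FOR UPDATE after the change: 10,000

print(row_at_a_time // pipelined)  # 50x more round trips
```

With wide-area latencies dominating fetch time, the ratio of round trips translates almost directly into the retrieval slowdown Charlie objects to.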