Subject: Re: [IBO] Yet Another BDE emulation issue. Transactional Isolation issue I think.
Author: Helen Borrie
At 12:35 AM 30/06/2007, you wrote:
>Are these getting to the list?

Yes, but I have to confess it's rather like trying to swat a fly in a
room with a high ceiling. There are problems within problems here
(as often occurs when an old BDE app has been converted: the BDE is
really NOT a good environment for learning how datasets from
client/server databases work....)

>At any rate, I changed the start trans, try to commit or rollbacks to
>start trans, try commitretaining or rollbacks, then did a if in trans
>rollback when I closed out the list. I did this several places and it
>seems to better emulate the bde.

Can't make any sense out of this...but, basically, you should regard
"emulating the BDE" as something you do initially to get your app
functioning, not an objective in itself.

>I think what ibo does is put everything
>under the same transaction once you start a new one on a cached
>update...the bde left the data before you starting the transaction in the
>old transaction state somehow...that is ibo is refreshing the data too
>well :).

This indicates confusion. What Cached Updates does is to separate
the dataset from its "live" connection to the underlying cursor set -
regardless of whether you are using the BDE, IBO, IBX or
whatever. When you call ApplyUpdates, you're actually asking the
interface to post any changes. The Commit happens when the
*transaction* is committed. And then, confusingly, CommitUpdates
clears the cache. Where IBO behaves differently to the others is
that it retains the cache after the commit. You'll get a fresh cache
when you refresh.
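As a minimal sketch of that sequence in IBO terms (the component names here are invented for illustration; your dataset and transaction objects will differ):

```pascal
// Hypothetical TIB_Query (qryOrders) with CachedUpdates := True,
// attached to a hypothetical transaction component trnMain.
qryOrders.ApplyUpdates;   // ask the interface to post the cached changes
trnMain.Commit;           // the *transaction* commit makes them permanent
// IBO keeps the cache after the commit; calling CommitUpdates would
// clear it, and a Refresh gives you a fresh cache.
```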

You don't really have much use for cached updates with IBO. It's
there for compatibility and, in the longer term, in case you have a
special need for it. From what you (seem to) have described, it's
just extra overhead and confusion...but you might like to explain why
you're needing to use it.

>Both the bde and ibo were set to read commit so I am at a loss
>as to why they act different...maybe because bde has more middleware in
>there or something?

You should understand that Read Committed is a transaction
isolation level. What it does is to make the latest committed state
of the database available to the transaction. It sets a parameter
that the *server* responds to with the appropriate isolation
behaviour. It doesn't somehow tell your datasets to refresh
themselves if database state changes after the transaction
starts! You have to ask for that
- by calling the Refresh method of the dataset.
technique known as DML Caching to make this happen automatically, but
I'd recommend getting your head around what the various transaction
isolation settings mean, before attempting that.
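A sketch of the distinction, again with invented component names: even in Read Committed, another user's committed work only becomes visible in your dataset when you ask for it.

```pascal
// trnRead is a hypothetical Read Committed transaction;
// qryStock is a dataset running in it.
// ...meanwhile, another user commits a change to STOCK...
qryStock.Refresh;   // only now does the dataset re-fetch, so it can
                    // see the newly committed database state
```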

>At any rate this also sped up my application somewhat too on the remote
>sites, now updates that took 8-10 seconds are back down to about a 1/2
>second. Once I go through and make sure ibo is using the best plan (which
>it does not do so well where the bde guess good...

Neither the BDE nor IBO (nor Java, nor PHP, nor Ruby, nor whatever)
has anything to do with the plan. It is created by the server's
optimizer, on the basis of the SQL statement you give it. And what
the optimizer generates by way of a plan is governed a lot (although
not by any means completely) by the indexes available.

>probably because I am
>using firebird 1.0 on dialect 1, and its optimized for the 2.0 engine on
>dialect 3 maybe), this thing should really fly.

More wrong assumptions....however, I'm pretty certain that your SQL
will need some repair work to give correct output and handling with
IBO 4.7. The fact that your improper syntax seems to work (or has
done so in the past) has everything to do with the fact that
InterBase and that very old Firebird version you are using were more
tolerant of improper syntax.

IBO 4.7 has had extensive refactoring to take into account the
increasing restrictions on syntax anomalies that have occurred with
each successive release of Firebird, particularly affecting relation
aliases. So, there's probably some potential for your existing app
to go pear-shaped with IBO 4.7 unless you set about finding and
fixing SQL. Myself, I'd want to do that, so that I could be certain
that I'm getting the results I expect. The "GIGO" principle applies
to SQL just as much as to any logical process.

>See my other post "My IBO
>port is too slow...need performance advice" as this seems to help with
>that too. I would still appreciate some general performance advice

From one of your postings, I noticed you thought that your view
should be using an index. It won't, because the supposed "key" is
not associated with an index, because SELECT DISTINCT is an
aggregating operation. You would get far better performance from
that query by doing the join directly, table-to-table, across those keys.
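For illustration only - the table and column names below are invented - the idea is to replace a view built on SELECT DISTINCT with a direct join across the real (indexed) keys:

```pascal
// Instead of selecting from a view defined with SELECT DISTINCT,
// join the tables directly on their primary/foreign keys, which
// the optimizer can serve with indexes:
qryDetail.SQL.Text :=
  'SELECT c.CUST_ID, c.CUST_NAME, o.ORDER_ID, o.ORDER_DATE ' +
  'FROM CUSTOMER c ' +
  'JOIN ORDERS o ON o.CUST_ID = c.CUST_ID';
```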

DO supply output names for your derived fields - always. Different
versions of IB and Firebird handle unidentified derived fields in
different ways and assigning identifiers to them specifically is not
just good practice, it ensures that your client code is stable and
unequivocal when managing these fields.
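A sketch of what that looks like in practice (names invented again): every derived field gets an explicit alias, so client code can refer to it by a name you control rather than one the server makes up.

```pascal
// Give every derived field an explicit output name:
qryTotals.SQL.Text :=
  'SELECT o.CUST_ID, ' +
  '       SUM(o.AMOUNT) AS TOTAL_AMOUNT, ' +  // aliased, so the client
  '       COUNT(*) AS ORDER_COUNT ' +         // can rely on the name
  'FROM ORDERS o GROUP BY o.CUST_ID';
// Client code can then use FieldByName('TOTAL_AMOUNT') safely,
// regardless of how the server would have labelled the column.
```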

Another misapprehension I noted in your postings was your belief that
the client side can somehow do something with indexes to make queries
run faster. It can't...that's for Paradox and some other data-storage
systems that store indexes in files that the client application can
read (via the driver layer for that system). You *can* have
client-side caching of metadata with IBO, which has a high cost in
terms of the initial connection but which IBO can refer to for some
operations. It can be handy for very slow connections though,
especially if the metadata doesn't change often.

IBO *does* manage a number of data arrays for datasets, though. A
highly important one, for both performance and data integrity, is the
KeyFields array, which is built from the KeyLinks of the set. IBO
can do this automatically (if KeyLinksAutoDefine is set) with a query
over a single table that has a primary key. With joined sets, you
need to turn off KeyLinksAutoDefine and define the KeyLinks yourself,
because the client side has no way to determine the uniqueness of
such sets. When IBO has nothing to go on, it will use the DB_Key
(rdb$db_key, which is present on all SELECT output, and always unique
within a transaction context).
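For a joined set, a minimal sketch (hypothetical names) - the KeyLinks entries name the columns that together make each row of the set unique:

```pascal
// For a joined dataset, define the KeyLinks by hand:
qryJoined.KeyLinksAutoDefine := False;
qryJoined.KeyLinks.Clear;
qryJoined.KeyLinks.Add('C.CUST_ID');   // PK of CUSTOMER (alias C)
qryJoined.KeyLinks.Add('O.ORDER_ID');  // PK of ORDERS (alias O)
// With no KeyLinks defined at all, IBO falls back to RDB$DB_KEY.
```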

When you have a sound query and your KeyLinks are correct, IBO does
miracles with fetching data from sets that are being treated as
bi-directional, through a magical technique with smoke, mirrors,
KeyLinks and up to three live cursors on the row buffer at the
server. It's nothing like the stumpy, lumpy stuff that the BDE does
with Paradox indexes, either, so throw away all those old Paradox assumptions.

I could go on and on if I wasn't up to my ears in stuff but, really,
you'll do yourself a lot of favours just by taking yourself through
the various tech info sheets at the IBO website. You are far from
being the first person who ever needed to get past old BDE warts in
order to make optimal use of IBO...if you were, those tech info
sheets would not exist.

If you bump into some insoluble problems with the new-generation
parsing in IBO 4.7, contact me privately and I'll try to set you up
with an older, more tolerant version of IBO.