Subject: Re: [IBO] TIB_Query buffering
Author: Helen Borrie (TeamIBO)
At 05:35 AM 21-04-02 +0000, George Helmke wrote:
>One of my performance increasing tricks has been to take a query and put
>the contents in an array. Then, I do all of my operations on the array,
>and put it back in the query when I am done. This works well for a GUI
>where the user is jockeying 100 different elements around the screen.

It seems as if you are re-doing what IBO does itself with Fields[],
Params[] and KeyFields[].

>What I am wondering is - what are the performance differences between this
>and just using a TIB_Query as is, with locates, etc?

"TIB_Query as is, with locates, etc" is a bit contradictory. Locates, etc.
are a legacy developers bring to IBO from their previous work with Delphi's
native data access model, which was designed for desktop databases. In the
native IBO data access, there is a lot going on under the hood that could
fool you into thinking it's working with everything in memory, when it's
actually shuffling things around in the buffers (which are arrays) and
fetching chunks of data across the wire on an "as-required" basis. If your
application wants to (logically) pull everything across the wire and
operate on it in memory, like a Paradox application, IBO will let it, and
it will be highly inefficient: it's contrary to the efficient C/S model,
where the GUI's job should be to act as a traffic cop between the user and
the API. A well-designed C/S application defers to the server for data
processing.

>I would assume that writing would be slower, because of the overhead
>involved in actually posting the changes to the server.

Slower than what? All writes are DML queries. Processing batches of
data on the client on a row-by-row basis (i.e. in the typical "while not
eof" loop) is the slowest thing you can do. Every row you modify has to
get to the API and then cross the wire as a query. And, because you are
making the client machine do the processing work, you are adding a whole
bunch of extra work that the application has to do before statements even
leave the API.

You just don't do row-by-row processing on the client if you want
efficiency. Process single-row (interactive, positioned) updates and
inserts in the buffer, by all means, but, for batched operations, use the
GUI to collect parameters for a stored procedure or a searched DML
operation (update, delete) and get the server to perform the processing.
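
To make the contrast concrete, here is a rough sketch (the table, column
and component names are invented for illustration; "qry" stands for a
TIB_Query and "dsql" for a TIB_DSQL-style statement component - adjust to
whatever you actually use):

  // SLOW: row-by-row on the client - one UPDATE crosses the wire per Post
  qry.First;
  while not qry.Eof do
  begin
    qry.Edit;
    qry.FieldByName('PRICE').AsFloat :=
      qry.FieldByName('PRICE').AsFloat * 1.1;
    qry.Post;    // each Post is a separate DML statement to the server
    qry.Next;
  end;

  // FAST: one searched UPDATE - the server processes every row itself
  dsql.SQL.Text :=
    'UPDATE STOCK SET PRICE = PRICE * :FACTOR WHERE CATEGORY = :CAT';
  dsql.ParamByName('FACTOR').AsFloat := 1.1;
  dsql.ParamByName('CAT').AsString := 'HARDWARE';
  dsql.Execute;

The second form sends a single statement, no matter how many rows are
affected, which is exactly the "use the GUI to collect parameters" pattern
described above.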

>What about reading? How does that compare?

Compare to what? If by "reading" you mean the process of fetching rows
across the wire for a dataset, I think you'll get a clearer understanding
of what happens with datasets by reading Jason's description of the IBO4
release (you can link to it from the home page of the website, from the
block where the latest IBO version is displayed). There, he goes into a
lot of detail about the buffering work.

>If you know the index of an array, you can get to that particular element
>in no time. Is there some way to use the buffer in TIB_Query in a similar
>fashion with the KeyLinks parameter?

Yes, the keys (keylinks) for the rows currently in the buffer are stored in
the KeyFields[] array. If you study the component help for the base
classes of the dataset classes, you'll get a feel for what's happening and
also pointers to the methods and properties that are there. Of especial
interest to you will be these three methods:
  function SeekKeyForBufferFields: boolean;
  function LookupKeyForBufferFields: boolean;
  function LookupKeyForFields: boolean;
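
As a very rough sketch only (the access pattern below is an assumption,
not verified usage - the real signatures and the way key values are
loaded may differ, so do check the component help first), a keyed lookup
against the buffer might look something like:

  // Hypothetical: load the key value you want, then probe the buffer
  MyQuery.KeyFields.ByName('ID').AsInteger := 1234;
  if MyQuery.LookupKeyForBufferFields then
    // the row with that key is already buffered - no fetch needed

The point is that the buffer is already keyed, so you get the
array-by-index behaviour you describe without maintaining your own copy.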

Helen Borrie (TeamIBO Support)

** Please don't email your support questions privately **
Ask on the list and everyone benefits
Don't forget the IB Objects online FAQ - link from any page at