Subject | Re: This List - Mandatory Reading - List Moderator/Owner
---|---
Author | Jason Wharton |
Post date | 2000-12-04T20:49:13Z |
> I'm curious as to what you disagree with on the TTable stuff. Our product
> supports InterBase, MS SQL Server, Oracle, Sybase, Informix & Paradox and we
> found nothing but problems when TTable was used against client/server
> database engines. TTable works great for Paradox.

Glad you asked...
TTable works great with InterBase too, if your usage pattern and database can
take advantage of some fairly advanced BDE capabilities. Most people are
probably not aware of some nice things the BDE does with TTables.
First of all, your IndexFieldNames must name columns that exactly match both
an ascending (ASC) and a descending (DESC) index. This is because the BDE
uses both an ascending and a descending cursor to return rows to the dataset.
For example, with a TQuery, calling the Last method fetches every row of the
dataset until the last one arrives. But with a TTable, if conditions are
right, instead of fetching all rows the BDE simply opens a descending cursor
and immediately fetches the records at the end of the dataset. As a result it
is very quick, and only the rows needed to fill a grid (or whatever) are
brought to the client. If the user scrolls backwards, it just fetches more
rows as they are needed.
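The descending-cursor trick can be sketched in Python, with SQLite standing in for InterBase and the BDE (the `customer` table, its columns, and the index names are illustrative, not from the original post): instead of reading all 10,000 rows to reach the end, the "Last" operation opens a descending cursor and fetches only enough rows to fill a grid.

```python
import sqlite3

# In-memory stand-in for a client/server table (illustrative schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (cust_no INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO customer VALUES (?, ?)",
                [(i, f"Customer {i:05d}") for i in range(1, 10001)])
# Matching ascending and descending indexes, as the BDE optimization requires.
con.execute("CREATE INDEX cust_name_asc ON customer (name ASC)")
con.execute("CREATE INDEX cust_name_desc ON customer (name DESC)")

def last_rows(page_size):
    """Emulate TTable.Last: open a descending cursor and fetch only
    enough rows to fill a grid, instead of reading every row first."""
    cur = con.execute(
        "SELECT cust_no, name FROM customer ORDER BY name DESC LIMIT ?",
        (page_size,))
    return cur.fetchall()

tail = last_rows(3)
print(tail[0])  # the final record in name order, fetched immediately
```

Only `page_size` rows cross the wire; scrolling backwards would simply fetch the next batch from the same descending cursor.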
Where it gets really nice is a Locate() on the column the dataset is ordered
by. In this case the BDE uses an ASC and a DESC cursor with input parameters
set to the desired value. It fetches the desired record right away based on
the input parameter, and then the ascending and descending cursors fetch
records as needed, depending on where the user scrolls from there and whether
a grid needs additional rows to paint.
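That two-cursor Locate can be sketched the same way (again with SQLite as a stand-in for InterBase, and illustrative table and column names): an ascending cursor parameterized on the sought value jumps straight to the record, while a descending cursor supplies the rows just above it for backward scrolling.

```python
import sqlite3

# Illustrative stand-in table, ordered by name.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (cust_no INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO customer VALUES (?, ?)",
                [(i, f"Customer {i:05d}") for i in range(1, 10001)])

def locate(name, window=2):
    """Emulate Locate() on the ordering column: both cursors take the
    sought value as an input parameter, so only a handful of rows
    around the hit are ever fetched."""
    asc = con.execute(   # the located row and those after it
        "SELECT name FROM customer WHERE name >= ? ORDER BY name ASC LIMIT ?",
        (name, window + 1)).fetchall()
    desc = con.execute(  # rows before it, for scrolling upward
        "SELECT name FROM customer WHERE name < ? ORDER BY name DESC LIMIT ?",
        (name, window)).fetchall()
    return [r[0] for r in desc[::-1]] + [r[0] for r in asc]

rows = locate("Customer 00500")
print(rows)  # a small window centered on the located record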
Now, I believe that if you added a Filter, or tried to do anything outside
these narrow constraints for optimized interaction with the data, you were
all of a sudden facing performance problems. It was a bit touchy and hard to
predict in all cases, so most people probably never figured out how to take
advantage of these capabilities reliably. Plus, it isn't uncommon for
applications to need more versatility than TTable allows.
So, the short of it is: because this is a bit obscure, and limiting at best,
most people simply discount it altogether and talk about the TTable component
as if it were an abominable curse in the client/server world of data access.
Why do I know this? This is the last significant BDE feature that IBO doesn't
fully emulate... yet. I am just now in the finishing stages of adding this
capability to IBO version 4, which should go beta here in a week or so. It is
finished and working in the native IBO datasets; all that remains is the
TDataset integration and touching up my visual controls to cope with BOF not
being the first record of the buffer in some cases. Previously that could
always be assumed, but now it cannot.
I have had to study the BDE's capability here a bit, and I have discovered
that I will be able to make it much more flexible than it was in TTable. I am
introducing it in a way that lets you configure it in queries (native and
TDataset) as well as in the TIBOTable component.
For example, suppose a customer dataset containing a million records is
ordered by name and you want to locate a record by customer number. Here is
what will happen internally when Locate is called: an internal work cursor is
parsed together and set up based on the original SELECT statement (taking
everything already there into consideration, including input parameters).
Then the bookmark of the desired record is fetched from the server (if it
exists), which is very quick because not all records have to be brought to
the client. That bookmark is plugged into the dataset, and the full record
behind it is pulled in through another internal cursor IBO maintains. Then
the dataset's refinement criteria are adjusted so that it quickly fetches the
records just before and after the desired one. If the user scrolls either
way, records are fetched dynamically in both directions. Thus, by using
automatic input parameters and internal cursors, I am able to virtualize very
huge datasets into easily manageable work horses.
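The sequence above can be sketched as follows, with SQLite standing in for InterBase, the rowid approximating the bookmark (RDB$DB_KEY in InterBase), and all table and column names being illustrative rather than IBO's actual internals:

```python
import sqlite3

# Stand-in customer table, ordered by name in the dataset.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (cust_no INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO customer VALUES (?, ?)",
                [(i, f"Customer {i:05d}") for i in range(1, 10001)])

def locate_by_number(cust_no, window=2):
    # 1. Work cursor: fetch only the bookmark (rowid here, RDB$DB_KEY
    #    in InterBase) of the desired record - one narrow row, not a
    #    million records, crosses the wire.
    row = con.execute("SELECT rowid FROM customer WHERE cust_no = ?",
                      (cust_no,)).fetchone()
    if row is None:
        return None
    bookmark = row[0]
    # 2. Pull in the full record behind the bookmark via another cursor.
    _, name = con.execute(
        "SELECT cust_no, name FROM customer WHERE rowid = ?",
        (bookmark,)).fetchone()
    # 3. Refine the name-ordered dataset around that record; further
    #    rows either way are fetched only as the user scrolls.
    after = con.execute(
        "SELECT name FROM customer WHERE name > ? ORDER BY name ASC LIMIT ?",
        (name, window)).fetchall()
    before = con.execute(
        "SELECT name FROM customer WHERE name < ? ORDER BY name DESC LIMIT ?",
        (name, window)).fetchall()
    return [r[0] for r in before[::-1]] + [name] + [r[0] for r in after]

view = locate_by_number(777)
print(view)  # a five-row window centered on the located customer
```

The point of the sketch is the shape of the traffic: one bookmark lookup, one record fetch, and two small parameterized range cursors, regardless of how large the underlying table is.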
This was just a simple example, but the flexibility of IBO's implementation
of this feature is quite extensive. It can still be used with most queries (I
haven't found one that doesn't work yet), with user-supplied input
parameters, filters, locating on any column or combination of columns, IBO's
search mode, incremental searching, SetRange, etc.
It is super easy to set up as well: all you do is add an attribute to the
OrderingLinks property entries to indicate that you want horizontal dataset
refinement.
Those interested in being on the IBO version 4 BETA for this and some other
exciting capabilities are encouraged to send me a private email to be
included in the group.
Kind regards,
Jason Wharton
CPS - Mesa AZ
http://www.ibobjects.com