Subject: For Fabiano Bonin
Author: Helen Borrie
I've spent some time writing a little app that you can use to test the time
it takes to do the metadata setup when you start an application. The
project source is here:

You can access the file at the URL

Please treat this message as your help text for using it.

The program is designed to make a TCP/IP or local connection to the Fb 1.5
employee database in the default location on the server. If you have that
database somewhere else, you can change any of the segments of the
connection string on the form.
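For anyone unfamiliar with the format: a remote Firebird connection string is the server name, a colon, and then the database path as seen from that server, while a local connection is just the path. Here is a rough sketch of that split in Python; the function name and the one-letter-drive heuristic are mine for illustration, not part of the demo app:

```python
def split_connection_string(conn: str) -> tuple[str, str]:
    """Split a Firebird-style connection string into (server, database path).

    A remote string looks like "server:C:\\data\\employee.fdb"; a local
    connection is just the path. A single letter before the ":" is taken
    to be a Windows drive, not a server name.
    """
    head, sep, tail = conn.partition(":")
    if sep and len(head) > 1:        # "server:..." -- remote connection
        return head, tail
    return "", conn                  # local path (possibly "C:\\...")

print(split_connection_string(r"myserver:C:\databases\employee.fdb"))
print(split_connection_string(r"C:\databases\employee.fdb"))
```

The demo form just exposes those segments as separate edit boxes so you can repoint it at wherever your copy of employee lives.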

When the form opens, it is not connected to any database. When you
attach to the database (which can be on any machine in the network), it
instantiates a TIBODatabase and opens the Employee table as either a
TIBOTable or a TIBOQuery. There is a radio box where you can choose which.

Behind the form is a TIB_MonitorDialog. Before you do anything, bring it
forward and check the Timestamp field. There's a good reason for this...

OK, so choose the table or the query option and click the Attach
button. At the right is a box where the time in milliseconds between
calling Connect and completing the Open of the query or TIBOTable is
recorded. I could slow this down by including a login prompt but, with the
method implemented on the form, I have not been able to get the whole
process to take even one millisecond. It's simply too fast for the timer.
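The measurement on the form is nothing more exotic than noting the clock before Connect and again after the Open completes. A minimal sketch of the same idea in Python, with no-op stand-ins for the IBObjects calls (the names here are placeholders, not the component's actual API):

```python
import time

def time_attach(connect, open_dataset) -> float:
    """Return the milliseconds elapsed between calling connect()
    and completing open_dataset() -- the same interval the demo
    form reports."""
    start = time.perf_counter()
    connect()
    open_dataset()
    return (time.perf_counter() - start) * 1000.0

# With no-op stand-ins the interval is far below a millisecond,
# which is why a simple on-form timer so often reads zero.
ms = time_attach(lambda: None, lambda: None)
print(f"attach + open took {ms:.3f} ms")
```

When the work being timed finishes inside the timer's resolution, the reading is useless, which is exactly why the monitor's timestamps are the better instrument here.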

So that's why I say check the Timestamp field on the monitor: it
timestamps each API call, so you can see how long each metadata query is
taking. I can't get the attach plus the opening of the query to last even
one second, anywhere on the network. The TIBOTable I can sometimes nudge up
to one second, but I can't get it to take longer. Of course, this form is
opening only one dataset. In the worst case, you have a main form that is
starting up with active datasets (that terrible old BDE default), and a lot
of them.
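If you would rather turn the monitor's timestamps into per-call durations than eyeball them, something like this will do. The timestamp layout and the API-call names in the sample log are illustrative only, not IB_Monitor's exact output format:

```python
from datetime import datetime

def call_deltas(lines):
    """Given monitor lines beginning with an HH:MM:SS.mmm timestamp,
    return the milliseconds elapsed between successive API calls."""
    stamps = [datetime.strptime(line.split()[0], "%H:%M:%S.%f")
              for line in lines if line.strip()]
    return [round((b - a).total_seconds() * 1000.0, 3)
            for a, b in zip(stamps, stamps[1:])]

# Invented sample of what timestamped monitor output might look like.
log = [
    "10:15:32.100 isc_attach_database",
    "10:15:32.180 isc_dsql_prepare SELECT ... FROM RDB$RELATION_FIELDS",
    "10:15:32.210 isc_dsql_execute",
]
print(call_deltas(log))  # -> [80.0, 30.0]
```

A long gap between two adjacent timestamps points straight at the metadata query that is costing you the time.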

Now, with the monitor, clear its output each time you go to make a new
attachment. As a demonstration for yourself, save the monitor output for
each option to its own text file, then run the two text files through
Beyond Compare or another diff tool. You will soon see why table objects
eat so much bandwidth compared with query objects.
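If you'd rather script the comparison than use a GUI diff tool, Python's difflib gives the same picture. The two log fragments below are invented for illustration, but the shape of the difference, extra metadata queries prepared only for the table object, is what you will see:

```python
import difflib

# Invented fragments of monitor output for the two options.
query_log = [
    "isc_dsql_prepare SELECT * FROM EMPLOYEE",
    "isc_dsql_execute",
]
table_log = [
    "isc_dsql_prepare SELECT * FROM EMPLOYEE",
    "isc_dsql_prepare SELECT RDB$FIELD_NAME FROM RDB$RELATION_FIELDS ...",
    "isc_dsql_prepare SELECT RDB$INDEX_NAME FROM RDB$INDICES ...",
    "isc_dsql_execute",
]

# Lines present only in the table log show up prefixed with "+".
diff_text = "\n".join(difflib.unified_diff(
    query_log, table_log,
    fromfile="query.txt", tofile="table.txt", lineterm=""))
print(diff_text)
```

Every "+" line is traffic the table object generated that the query object never needed.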

The table object of course needs *everything*, because the TTable methods
require it. That, all by itself, is an object lesson in why table objects
are so bad for client/server networks.

Anyway, with the timings displayed on the form, plus the timestamps in the
monitor output, it should be pretty easy to find out where your connection
delays are coming from. You can replace the samples in the project with
your own actual datasets...

Hope this helps.