Subject RE: [ib-support] General Multi-User Architecture Question
Author Alan McDonald
I remember that the first Win 3.11 network setup I used kept a single copy
of Win 3.11 on the server, which all users ran (apart from the client-side
configs kept on each machine). So the setup is not impossible. But I agree
it is not my norm, nor is it the norm recommended to me.
When I did my first C/S application, cached updates were considered the only
way to go, with everyone saying it was imperative to keep network traffic to
a minimum. On the other hand we have Jason Wharton (IBObjects components for
IB), who has always, in my memory, argued that cached updates are not good
architecture, especially where a decent LAN infrastructure is available,
because there is a far more important need to keep transactions turning over
and work constantly being committed. I agree with this philosophy and use his
components to this end.
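The difference between the two philosophies can be sketched in a few lines. This is only an illustration of the pattern, not of IBObjects or InterBase itself: it uses Python's sqlite3 as a stand-in database, and the table and function names are hypothetical.

```python
import sqlite3

# Stand-in for an InterBase connection; the transaction pattern, not the
# driver, is the point of this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")

def post_cached(edits):
    # Cached-updates style: edits accumulate client-side and are posted
    # in one late, long-lived transaction.
    with conn:  # a single transaction for the whole batch
        conn.executemany("INSERT INTO orders (qty) VALUES (?)", edits)

def post_immediately(qty):
    # Commit-as-you-go style: each unit of work is committed at once,
    # keeping transactions short and making work visible to other users sooner.
    with conn:  # one short transaction per edit
        conn.execute("INSERT INTO orders (qty) VALUES (?)", [qty])

for qty in (1, 2, 3):
    post_immediately(qty)
post_cached([(4,), (5,)])

print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # → 5
```

On a decent LAN, the per-commit round trips of the second style are cheap, and the payoff is that no user's work sits uncommitted on the client.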

If the application were parcelled up into small modules, and the initial
executable were small and use of the other modules infrequent, then I might
(only might) be tempted into this setup. A 6.5 MB executable is, as you say,
quite large to have to transfer every time someone starts the application.
If users close and reopen it through the day, then the LAN will suffer, as
will each user, from unwarranted traffic. Remember that a transfer of this
type is not throttled, so the LAN would need to be a faultless 100 Mbit/s to
perform without frustration in the first instance. On the other hand, it may
make updating the application easier and faster for the developer.
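Some back-of-envelope arithmetic makes the cost concrete. These figures assume an ideal, uncontended link and ignore protocol overhead, so real-world numbers will be worse:

```python
# Rough transfer times for a 6.5 MB executable pulled at every startup.
exe_bits = 6.5 * 8 * 1_000_000  # 6.5 MB expressed in bits (decimal megabytes)

links = {
    "100 Mbit/s LAN": 100e6,
    "10 Mbit/s LAN": 10e6,
    "56 kbit/s dial-up": 56e3,
}
for name, rate_bps in links.items():
    print(f"{name}: ~{exe_bits / rate_bps:.1f} s")
# 100 Mbit/s LAN: ~0.5 s
# 10 Mbit/s LAN: ~5.2 s
# 56 kbit/s dial-up: ~928.6 s  (roughly 15 minutes)
```

Half a second on a clean 100 Mbit/s segment is tolerable; a quarter of an hour over dial-up is clearly not, which is why the WAN case below is the strongest argument against this setup.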

Bottom line: I would question the setup you have been given, and listen to
the arguments presented for it, then test the application startup and any
impact on your network as a result.


-----Original Message-----
From: Gary [mailto:glablj@...]
Sent: Thursday, 20 March 2003 8:25 AM
Subject: [ib-support] General Multi-User Architecture Question

Hi All,

Thanks in advance for the thoughts on this one...

We are having an application developed out-of-house. It replaces an
older application that was poorly designed for LAN (and WAN)
environments. This replacement app has been hailed by the developers
as following a client-server architecture. When I take a close look
at how it is installed, I am concerned that such is not the case.

Installing goes like this:
1. Install InterBase on the Server (set up certificates, etc.)
2. Install app server-side components on the same Server (Compaq
ProLiant 3000)
3. Install client-side components on the clients.

Now that looks correct so far, as a general rule (in my mind).

A close examination of what constitutes server-side components and
what are client-side components in this case has me puzzled. It looks
like this:
1. InterBase and the GDB file are on the Server
2. The application executable (a Delphi 6 app) is also installed on
the Server
3. User config files, such as grid config, workstation config, etc.,
are stored on the clients.

Having the 6.5 MB executable on the Server doesn't make sense in my
mind. With that configuration each client must pull the executable
across the network as well as the data. That seems like poor client-
server design (at the basic app level), for one, and it also seems
like a generally inefficient model for network performance,
especially where the customers have dial-up WAN connections.

My Design Idea:
1. Install InterBase (and the GDB) on the Server
2. Install any absolutely required server-side components for the app
on the Server
3. Install all binaries (the EXE, DLLs, etc.) on the clients
4. Use the BDE and ODBC to point to the Server share
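For step 4, a client-side BDE alias for InterBase would normally point at the server process over TCP/IP (the `host:path` syntax) rather than at a file share, since the GDB should only ever be opened by the server. A minimal sketch, in which the alias name, host, path, and user are all placeholders:

```ini
; Hypothetical BDE alias for the app's InterBase database (all values are placeholders)
[MYAPP_DB]
TYPE=INTRBASE
SERVER NAME=dbserver:C:\data\myapp.gdb   ; host:path over TCP/IP, not a UNC share
USER NAME=SYSDBA
```

The path after the colon is the path as the server sees it on its own disk, not a path the client could reach directly.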

Can anyone comment on this, and correct my design idea if it's
completely hosed?

Again, very much appreciated.

Best to all,
