Subject: Re: [IB-Architect] Re: 'approximately' 400 user connections
Author: Benny Schaich
Hi Markus,

you are totally right to be confused about measurements of database performance.
How many users can connect depends heavily on what they do, and on whether they do
anything at all. That is why the knowledgeable customer's usual question, whenever
you talk about user counts, is: "Do you mean concurrent users?"

Nevertheless, even this is quite inaccurate, as it depends on what the clients do.
One very neat example came up when I encountered the mainframe world and found the
following problem: a very large user base (let's say 10,000) accesses a table with
significantly fewer records (about 1,000). In this case it is actually smarter to
go through the table sequentially and deliver the current record to all users
waiting for it, instead of searching for the right record for every request -
something that IB couldn't do (and I actually was asked about this some years ago).
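
Roughly, the trick looks like this (just a sketch in Python with made-up names,
nothing InterBase-specific): collect the outstanding requests first, then satisfy
all of them in one sequential pass instead of doing one lookup per request.

    # Sketch of a "shared scan": one sequential pass over a small table
    # serves many outstanding requests, instead of one lookup per request.
    # Hypothetical data structures; nothing here is InterBase code.
    from collections import defaultdict

    def shared_scan(table, requests):
        """table: list of (key, record); requests: list of (user, key)."""
        # Group the waiting users by the key they asked for.
        waiting = defaultdict(list)
        for user, key in requests:
            waiting[key].append(user)

        results = {}
        # One pass over the table; every record is delivered to all
        # users currently waiting for it.
        for key, record in table:
            for user in waiting.get(key, ()):
                results[user] = record
        return results

    # 10,000 requests against a 1,000-row table touch each row only once.
    table = [(k, "record %d" % k) for k in range(1000)]
    requests = [(u, u % 1000) for u in range(10000)]
    print(len(shared_scan(table, requests)))   # -> 10000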

On the other hand, the MGA makes InterBase unbeatably fast in other situations -
especially with high user counts. Unfortunately this has never been used in
marketing, and it is not easy to do so, especially since in Oracle multiple
generations are simulated with "versioning", which means nothing but copying the
data and is therefore very resource intensive - guess what the Oracle freaks used
to ask me when I told them about versioning in IB...
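
To make the difference clear, here is the idea of multiple generations in a few
lines (only an illustration of the concept, not how IB implements it internally):
every update adds a new record version stamped with its transaction, and a reader
simply picks the newest version its snapshot is allowed to see, so readers never
block writers and nothing has to be copied wholesale.

    # Conceptual sketch of multi-generational records. Each update adds
    # a new version stamped with its transaction id; a reader picks the
    # newest version visible to its own snapshot. Illustration only.
    class Record:
        def __init__(self):
            self.versions = []          # list of (txn_id, value), newest last

        def write(self, txn_id, value):
            self.versions.append((txn_id, value))

        def read(self, snapshot_txn, committed):
            # Newest version created by a transaction that was already
            # committed when our snapshot started.
            for txn_id, value in reversed(self.versions):
                if txn_id in committed and txn_id <= snapshot_txn:
                    return value
            return None

    r = Record()
    r.write(txn_id=1, value="balance=100")          # committed
    r.write(txn_id=5, value="balance=90")           # still uncommitted
    print(r.read(snapshot_txn=4, committed={1}))    # -> balance=100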

But to your question:

The real issues once you have high user counts are: reliability, reliability and
reliability.
If you put 1000 users on a DB and it is down for just an hour, you can
immediately calculate what that costs by adding up those people's salaries.
The only way to guard against that is the ability to fail over, and this
starts to involve other systems than just the database.

One InterBase feature that is nice for this capability is the shadow function.
Unfortunately it is only usable as long as the IB process itself is running; to
my knowledge a shadow cannot be activated from an outside process.


> Case 1:
>
> Connections Supported [ n ]
>
> This in my mind is a marketing thing. A developer has a
> system in mind and is evaluating InterBase; thus, they look
> for a feature/functionality statement that says we support
> [ n ] connections.

...and it's not a database problem. Once the source is open, you'll find that you
can open as many connections as you want (as the manual states), as long as the OS
supports it.

> Case 3:
>
> As an example, consider a robot-driven assembly line system
> where there are 1000 robots running 24x7 capturing data.
> If 1000 live connections are required to make this possible,
> perhaps IB is not yet the server for this application. Such
> a system could be designed with InterBase (i.e. batch processing).
> How common are these applications? The applications that
> actually 'require' 1000 live and concurrent connections?
>

They are quite common in large organisations. The growing internet business also
creates a new kind of application that runs under quite some stress.

> If the 'real' connection need is beyond 150 I strongly suggest
> looking at a middleware solution.
>

Seconded.
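
The heart of such a middleware layer is not much more than a small connection
pool: many application threads share a handful of real database connections. A
rough sketch follows (the connect() callable and all the names are made up - use
whatever your driver provides):

    # Minimal connection-pool sketch: many client threads share a few
    # real database connections instead of holding one each. The
    # connect() callable is hypothetical, standing in for your driver.
    import queue, threading

    class Pool:
        def __init__(self, connect, size=10):
            self._free = queue.Queue()
            for _ in range(size):
                self._free.put(connect())    # open the real connections up front

        def run(self, work):
            conn = self._free.get()          # block until a connection is free
            try:
                return work(conn)            # e.g. execute a query
            finally:
                self._free.put(conn)         # hand it back for the next caller

    # 1000 "users" served over only 10 connections:
    pool = Pool(connect=lambda: object(), size=10)
    threads = [threading.Thread(target=pool.run, args=(lambda c: None,))
               for _ in range(1000)]
    for t in threads: t.start()
    for t in threads: t.join()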

> - Better raw connection performance?

I don't think very much can be done there.

> - Better documentation/education on how to design an application
> so that it utilizes the server's resources most appropriately?
> Meaning instructions against idle connections.

As always: don't count on that, as our goal should always be to prevent people
from hurting themselves. You would not need a coast guard if it were enough to
write some instructions on a signboard.

> - Develop a connection manager type of application that is cross
> platform and specific to InterBase. Perhaps the Guardian
> could be utilized for something like this?

This sounds good for some large apps.

> - Enhance the servers ability to manage large numbers of idle
> connections.

I don't think this is needed, as those idle connections don't cause any
performance problems besides using some memory.


Benny

