Subject Re: [IB-Architect] Interbase connection limit and Support related Problems
Author Jason Chapman

I seem inept at getting my exact point across - I don't envy those poor
people who have to work for me!

> >Architecturally - I can't see why 500 users from one box is not feasible.
> >
> >I need to hear from Ann, Paul, Jim, Charlie about time to fix.
> >
> This isn't a question of fixing a bug or dropping in a feature, but
> a fairly major architectural change.
I was referring to IB on NT and Charlie's statement about the 256-connection
limit for IB on NT. This crossed with the response from you and Paul saying
it's fixable with a small mod.

This centres on my own problem of 250+ users on NT.

> The problem is simple to describe: Due to either bad application design,
> a very large number of simultaneous users, or both, the number of
> connections required exceeds the limits of the underlying operating
> system.
The latter, of course - it is thanks to good app design and a good DB that I
get extremely good performance with this number of concurrent call centre and
accounts staff all hitting the server.

> There are two questions here. The immediate question is what can
> be done in InterBase to rescue those unfortunate souls who didn't
> design their application for their operating environment
OK, bearing in mind you are answering this within the context of the Sun
thread - but for my part I was happy with NT's capability for numerous
connections.

>. The more
> interesting question is where application architecture is going, and
> how should InterBase react.
Not to the unsuspecting who are in the situation now, but I know what you
mean.

> A more radical approach is to allow a dormant connection (pick a
> metric) to break the TCP connection to the server, reestablishing
> itself as required.
As long as it can happily keep transactions alive during this virtual
disconnection.
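A rough sketch of what that reconnect-on-demand idea might look like on the client side (hypothetical names throughout; it assumes the server can keep transaction state alive across the TCP drop, which is exactly the open question here):

```python
import socket
import time

class ReconnectingConnection:
    """Hypothetical client-side wrapper: drops the TCP socket after an
    idle period and transparently reconnects before the next request.
    Server-side transaction state is assumed to survive the disconnect."""

    def __init__(self, host, port, idle_limit=300):
        self.host, self.port = host, port
        self.idle_limit = idle_limit          # seconds of dormancy allowed
        self.sock = None
        self.last_used = 0.0

    def maybe_disconnect(self):
        # Called periodically: break the TCP link if we've been dormant.
        if self.sock and time.time() - self.last_used > self.idle_limit:
            self.sock.close()
            self.sock = None

    def send(self, payload: bytes):
        if self.sock is None:                 # reestablish on demand
            self.sock = socket.create_connection((self.host, self.port))
        self.sock.sendall(payload)
        self.last_used = time.time()
```

The point of the sketch is only that the application never notices the drop: the next `send` quietly rebuilds the socket.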

>This scales better at the cost of a little
> more work, and if the server can manage the connection pool, everybody
> can stay connected until a limit is approached. The hard part is
> a server heuristic to decide when a client has croaked. A periodic
> keep-alive message -- a little wasteful of resources -- could do the
> trick, but there is always the danger that a clogged network could
> result in the loss of an important connection.
Doesn't IB already do this with a 60 - 120 second ping of the client?
This appears to be dynamically timed at the moment & could be further
enhanced by a configurable but hidden variable.
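For what it's worth, the "clogged network" danger Jim mentions can be softened by requiring several missed pings before declaring a client dead. A minimal sketch of such a server-side heuristic (all names hypothetical, the interval and margin being exactly the configurable variables suggested above):

```python
import time

class KeepAliveTracker:
    """Hypothetical server-side heuristic: each client must ping within
    `interval` seconds; only after `max_missed` consecutive intervals is
    it presumed dead. The margin guards against a clogged network
    costing us an important connection over one late ping."""

    def __init__(self, interval=60.0, max_missed=3):
        self.interval = interval
        self.max_missed = max_missed
        self.last_ping = {}                   # client id -> last ping time

    def ping(self, client_id):
        self.last_ping[client_id] = time.time()

    def reap_dead(self):
        # Returns the clients presumed dead and forgets them.
        now = time.time()
        deadline = self.interval * self.max_missed
        dead = [c for c, t in self.last_ping.items() if now - t > deadline]
        for c in dead:
            del self.last_ping[c]
        return dead
```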

>Another problem
> is event delivery. All in all, however, this is probably the best
> solution.
Could each client have a dedicated socket for pending messages?
It would need this socket for the ping as well.