Subject: Re: [IB-Architect] Interbase connection limit and Support related Problems
Author: Jim Starkey
Post date: 2000-07-06T15:14:03Z
At 06:45 AM 7/6/00 +0100, Jason Chapman wrote:

>This isn't a question of fixing a bug or dropping in a feature, but
>Architecturally - I can't see why 500 users from one box is not feasible.
>
>I need to hear from Ann, Paul, Jim, Charlie about time to fix.

a fairly major architectural change.
The problem is simple to describe: Due to either bad application design,
a very large number of simultaneous users, or both, the number of
connections required exceeds the limits of the underlying operating
system.
The traditional database vendor response is simple. Don't pick an
operating system that can't stand the number of connections you
require. Or, alternatively, if you're saddled with an operating
system that is limited in capabilities, design your application
around those limitations. One might reasonably expect somebody
about to shell out a quarter of a million bucks on a high-end server
to ask its manufacturer if it can handle more than 255 open files
and sockets. There is a lot of truth here, but for the sake of
argument, let's put it aside.
There are two questions here. The immediate question is what can
be done in InterBase to rescue those unfortunate souls who didn't
design their applications for their operating environment. The more
interesting question is where application architecture is going, and
how InterBase should react.
The most straightforward solution to the Health Racket problem
(many open but inactive application programs, each with an open
connection) is a remote interface connection pooling scheme in
which database connections are split into virtual and physical
connections, and a virtual connection is attached to a physical
connection only when a transaction is active. The problem with
this is that the remote interface (appropriately) lives in the
client process context, hence there is no way to share a connection
pool across processes. The remote interface could be moved out
of the user process context at the expense of two additional
context switches per database interaction, a rather unfair performance
hit for folks who designed their applications in a responsible manner.
It does have the advantage of being isolated on the client.
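To make the scheme concrete, here is a rough sketch in C of the
virtual/physical split. Every name in it is invented for illustration
-- none of this is actual InterBase code -- and a real version would
need locking and a wait queue for an exhausted pool:

#include <stddef.h>

#define PHYSICAL_MAX 32            /* ceiling on real sockets to the server */

typedef struct {
    int in_use;                    /* currently bound to a transaction? */
    int socket_fd;                 /* the real TCP connection */
} physical_conn_t;

typedef struct {
    physical_conn_t *phys;         /* NULL whenever no transaction is active */
    /* per-attachment state: credentials, database name, and so on */
} virtual_conn_t;

static physical_conn_t pool[PHYSICAL_MAX];

/* Bind a virtual connection to a physical one at transaction start. */
static physical_conn_t *pool_acquire(virtual_conn_t *vc)
{
    for (int i = 0; i < PHYSICAL_MAX; i++)
        if (!pool[i].in_use) {
            pool[i].in_use = 1;
            vc->phys = &pool[i];
            return vc->phys;
        }
    return NULL;                   /* pool exhausted; caller must wait */
}

/* Unbind at commit or rollback, freeing the slot for another
   virtual connection. */
static void pool_release(virtual_conn_t *vc)
{
    if (vc->phys) {
        vc->phys->in_use = 0;
        vc->phys = NULL;
    }
}

Note that the pool array lives in a single address space, which is
precisely why the scheme stops working when the remote interface lives
in each client process.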
An intrinsic problem with this design is that it does nothing to
address the problem of a large number of legitimate users. It's
kinda hard to understand why somebody in the open source era is going
to invest a great deal of work to get around somebody else's bad
design.
A more radical approach is to allow a dormant connection (pick a
metric) to break the TCP connection to the server, reestablishing
itself as required. This scales better at the cost of a little
more work, and if the server can manage the connection pool, everybody
can stay connected until a limit is approached. The hard part is
a server heuristic to decide when a client has croaked. A periodic
keep-alive message -- a little wasteful of resources -- could do the
trick, but there is always the danger that a clogged network could
result in the loss of an important connection. Another problem
is event delivery. All in all, however, this is probably the best
solution.
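The sweep itself is easy to sketch. The slot structure and the two
transport hooks below are stand-ins for illustration, not anything in
the actual server:

#include <time.h>

#define DORMANT_SECS 300     /* no traffic for five minutes: dormant */
#define PING_GRACE   3       /* unanswered pings before declaring death */

typedef struct {
    time_t last_activity;    /* updated on every request from the client */
    int    unanswered_pings; /* reset to zero whenever the client answers */
    int    alive;
} client_slot_t;

static void send_keepalive(client_slot_t *c)  { (void) c; /* write a ping packet */ }
static void drop_connection(client_slot_t *c) { (void) c; /* close the socket */ }

/* Run periodically: ping dormant clients, reap the persistently silent. */
static void liveness_sweep(client_slot_t *clients, int n, time_t now)
{
    for (int i = 0; i < n; i++) {
        client_slot_t *c = &clients[i];
        if (!c->alive || now - c->last_activity < DORMANT_SECS)
            continue;                  /* dead already, or recently active */
        if (c->unanswered_pings >= PING_GRACE) {
            drop_connection(c);        /* on a clogged network this may be
                                          a live client -- the risk above */
            c->alive = 0;
        } else {
            send_keepalive(c);
            c->unanswered_pings++;
        }
    }
}

PING_GRACE is the knob: set it too low and a clogged network reaps
live clients, too high and dead clients hold their slots that much
longer.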
The trend, however, is away from client/server application architectures
toward either multi-tier or web server based applications. Multi-tier
comes in all sorts of flavors depending on the weight of the client.
In all cases, however, database connections and operations are performed
on application servers. But most multi-tier architectures don't address
the connection problem -- a client needs a connection to the application
server just like the client/server client needed a connection to the
database server. The same guys who designed a bad client/server application
could design an equally bad multi-tier application. Multi-tier solves
some problems, leaves others untouched, and makes some things worse (like
the need to design a robust, upgradeable application-specific protocol).
Web applications are a great deal more interesting. The important
characteristics are a) the client is supplied by Netscape or Uncle Bill,
and b) the relationship between client and server is connectionless.
In this world, there is absolutely no way for the server to contact
a client, so the keep-alive is out of the question. An application must
be designed from the ground up to work in a connectionless world and to
work well in the ergonomically challenged browser environment. But,
exploiting the browser model and capabilities, a web application can
be superior to a corresponding client/server application for everything
but high volume data entry and interactive graphics.
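The per-request discipline this forces is easy to sketch. The pool
helpers below are stubs standing in for whatever the middle tier
actually provides:

typedef struct { int fd; } db_conn_t;
typedef struct { const char *session_token; } http_request_t;
typedef struct { int status; const char *body; } http_response_t;

/* Stub pool helpers -- placeholders, not a real API. */
static db_conn_t *db_pool_get(void)         { static db_conn_t c; return &c; }
static void       db_pool_put(db_conn_t *c) { (void) c; }
static void       run_query(db_conn_t *c, const char *sql) { (void) c; (void) sql; }

static http_response_t handle_request(const http_request_t *req)
{
    /* Session state is looked up by req->session_token; nothing is
       remembered about the transport between requests. */
    (void) req;

    db_conn_t *db = db_pool_get();   /* borrow for this request only */
    run_query(db, "select ...");     /* do all of the request's work */
    db_pool_put(db);                 /* give it back before responding */

    /* The whole answer goes back in one response.  No channel to the
       browser stays open, so server-initiated contact is impossible. */
    http_response_t resp = { 200, "<html>...</html>" };
    return resp;
}

A handful of pooled connections can serve hundreds of browsers this
way, because no browser holds a connection between requests.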
Given where the world is going, my druthers are to put development
resources on facilitating web application servers at the expense
of bailing out bad client/server applications. The web is here.
Think like it.
And if your $250,000 Sun can't hack the work, try a $2000 Linux box.
[The opinions expressed here are not necessarily those of Ann, Paul,
Matt, or InterBase-IV.]
Jim Starkey