| Subject | Re: User name SYSDBA |
|---|---|
| Author | johnson_dave2003 |
| Post date | 2005-08-31T02:50:19Z |
--- In Firebird-Architect@yahoogroups.com, Jim Starkey <jas@n...> wrote:
> johnson_dave2003 wrote:
>
> >In micro-scaled systems, the nominal cost of making and breaking
> >connections is just that - nominal. In even upper end small scale
> >systems, it is a serious bottleneck. When you add the overheads of
> >preparing queries to every connection, it becomes quite serious,
> >particularly on wIntel type hardware.
> >
> >
> It needn't be a bottleneck and certainly shouldn't be presumed to be a
> necessary bottleneck. Connection pools are a poor workaround for badly
> architectured or implemented database system. Connection pools are
> where bugs hang out looking for victims...
>

Most large scale systems are bound by the number of open connections
that they can support. 63k usable ports (64k minus the first 1k, which
are reserved) is not sufficient in many cases. Firebird's ceiling of
1,024 connections is higher than that of many commercial systems, but
it still demands connection pooling when you are talking about 8k
concurrently busy users in a 24x7 operation, plus automated business
processes doing twice the work that those users do.
A fully integrated mid-tier can act as a multiplier, but so far no
such beast exists that I am aware of. With current technologies,
connection pools are a requirement if you are to support even a few
thousand concurrent users.
Divorcing the user from the connection recognizes the limitations of
current connectivity options, allows seamless integration with existing
technologies, and allows forward migration by configuration if/when the
socket connection ceiling is lifted.
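Mechanically, divorcing the user from the connection is what a pool does: a small fixed set of physical connections is shared among arbitrarily many logical users. A minimal sketch in Python (the `ConnectionPool` class and the `connect` factory are illustrative, not any particular driver's API):

```python
import queue

class ConnectionPool:
    """Share a small, fixed set of physical connections among many
    logical users. queue.Queue is thread-safe, so concurrent users can
    borrow and return connections without extra locking."""

    def __init__(self, connect, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            # Each physical connection is opened exactly once, up front.
            self._pool.put(connect())

    def acquire(self, timeout=None):
        # A user borrows a connection; blocks while all of them are busy.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Returning the connection hands it to the next waiting user.
        self._pool.put(conn)
```

However many users cycle through acquire/release, the number of open sockets never exceeds `size` - which is the whole point when users outnumber ports.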
> >When you post to yahoo via browser, the server requests the cookie
> >and uses that to provide the illusion of a seamless dedicated
> >connection. They achieve the responsiveness that they do because
> >every submission does not have to establish a new connection - the pp
> >confirms that you have authority to do what is being asked, then it
> >calls prepared statements that has been prepared for over a month to
> >do the work you are requesting.
> >
> >
> Cached compiled statements are better way to do this.

Cached compiles are something I had presumed were already there - most
of my occupational work is in DB2, which has cached compiles, so it is
a service I presume upon. They offer part of a solution, but they do
not resolve all of the problems.
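The cached-compile idea amounts to keying prepared handles by their SQL text. A sketch, assuming a hypothetical connection object with a `prepare(sql)` method (not DB2's or Firebird's actual API):

```python
class StatementCache:
    """Compile each distinct statement once and reuse the handle,
    instead of re-parsing the SQL on every call."""

    def __init__(self, conn):
        self._conn = conn
        self._cache = {}

    def get(self, sql):
        stmt = self._cache.get(sql)
        if stmt is None:
            # First time we see this SQL text: pay the compile cost once.
            stmt = self._conn.prepare(sql)
            self._cache[sql] = stmt
        return stmt
```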
Let me give you some crude system parameters from two very different
but real companies. From these, we can do some math and identify some
minimum metrics for the connection model's capabilities.
Large system:
Peak transactions per hour: 1,200,000
Peak live users: 6,000
Peak transactions by live users: 400,000 / hour
Peak transactions by automated agencies: 800,000 / hour
Live user transactions are variable by time of day, automated agency
transactions are fairly constant throughout the day and involve a mix
of processes internal to and external to the company.
Live user transactions include web transactions, in which you must
allow for users to simply abandon the connection, and client/server
transactions in which the application has full control of the
transactional cycle.
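Some quick arithmetic on these numbers (the 50 ms connection-hold time below is my assumption, not a measured figure):

```python
# Little's law: average busy connections = arrival rate x hold time.
peak_tx_per_hour = 1_200_000
tx_per_second = peak_tx_per_hour / 3600         # ~333 tx/s at peak
hold_time_s = 0.050                             # assumed 50 ms per transaction
busy_connections = tx_per_second * hold_time_s  # ~17 busy on average
```

Even if the real hold time were ten times the assumed figure, average demand stays far below a 1,024-connection ceiling - which is why pooling makes this first case comfortable.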
Very Large System:
Peak transactions per hour: 39,000,000
Peak live users: 540,000
Peak transactions by live users: 13,000,000 / hour
Peak transactions by automated agencies: 26,000,000 / hour
Live user transactions are variable by time of day, automated agency
transactions are fairly constant throughout the day and involve a mix
of processes internal to and external to the company.
Live user transactions include remote transactions, in which you must
allow for line failures that will leave the connection abandoned, and
local client/server and CICS transactions in which the application has
full control of the transactional cycle. At peak times, for about one
week once per year, this company will saturate a fiber optic trunk
line that would normally service an entire city.
This scenario is incomplete, as it does not allow for specialized
sub-systems including image storage and cross reference systems, RFID
systems, inventory management, asset management, etc. This company's
(very good) profitability is largely predicated on a highly
centralized business model with real-time information for rapid
decision support.
How would we be able to support these companies' requirements without
connection pooling? In the first case, bumping the ceiling on the
number of concurrent connections to match the number of available
ports should suffice, provided the OS overhead of handling that many
open connections did not eat you alive.
In the second case, we have a problem. The number of concurrent live
users is almost 10 times the number of available ports to connect on.
On top of that, you will have automated processes that also have to
connect to the DBMS.
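The mismatch in the second case is easy to quantify, using the same port figures as above:

```python
# Concurrent users vs. usable ports on a single listening host
# (64k minus the reserved first 1k, per the figures above).
peak_users = 540_000
usable_ports = 65_536 - 1_024                  # ~63.5k usable
oversubscription = peak_users / usable_ports   # ~8.4 users per port
tx_per_second = 39_000_000 / 3600              # ~10,833 tx/s at peak
```

A single host cannot even address one socket per live user, so some layer must multiplex many logical users onto each shared connection - before the automated processes are counted at all.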