Subject: Re: [IB-Architect] Connection resources, classic vs. SS
Author: Ann W. Harrison
At 12:13 PM 2/26/2001 -0700, Jason Wharton wrote:
>I am aware that memory allocated to a server process is directly
>proportional to the page size and number of connections to a database.

Actually, it's not. The size of the page cache is settable, which
removes the page size as a factor. Connections are one factor; others
are metadata size and complexity; request size, complexity, and
duration (requests hold resources until they are unprepared); and,
for superserver, the lock table size and the number of connected
databases.
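As a rough sketch of why a settable cache removes page size as a factor (the numbers below are purely illustrative, not InterBase defaults): the cache is sized in buffers, so its memory footprint is buffers times page size, and the buffer count can be adjusted to hold the footprint constant across page sizes.

```python
def cache_bytes(num_buffers: int, page_size: int) -> int:
    """Approximate page-cache memory for one database:
    the cache is a fixed number of page-sized buffers."""
    return num_buffers * page_size

# Illustrative figures only: a 4 KB page with 2048 buffers and an
# 8 KB page with 1024 buffers occupy the same cache memory.
print(cache_bytes(2048, 4096))  # 8388608
print(cache_bytes(1024, 8192))  # 8388608
```

The point is that the administrator controls the buffer count, so a larger page size need not mean a larger cache.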

>In the classic architecture this makes sense, as each connection ends up being
>serviced by a separate process. This also explains why this architecture
>doesn't feasibly scale to more than 256 concurrent users.

Well, maybe yes, maybe no. It depends on how the operating system
handles a large number of processes.

>But, in the super-server architecture, connections are actually serviced by
>a single process such that allocated resources are shared in common rather
>than separated across process boundaries. What I am wondering is if the
>resource allocation parameters are still the same or if there is a
>difference such that new connections will cap out and not add additional
>resource allocations.

No. Every connection takes some amount of resources - I don't know
offhand how much, but it's non-zero for sure. Each transaction also
takes some amount of resources.
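A toy model of that point (all figures invented, not measured InterBase costs): under superserver the shared base cost is paid once, but the total still grows with the connection count, because each connection and each transaction carries a non-zero cost.

```python
def server_memory_bytes(base: int, connections: int,
                        per_connection: int, per_transaction: int,
                        tx_per_connection: int = 1) -> int:
    """Toy model: one shared base allocation plus a non-zero
    cost per connection and per active transaction."""
    per_conn_total = per_connection + tx_per_connection * per_transaction
    return base + connections * per_conn_total

# Doubling the connections still grows the footprint - just once per
# connection, not once per process as in classic.
print(server_memory_bytes(8_000_000, 100, 64_000, 16_000))  # 16000000
print(server_memory_bytes(8_000_000, 200, 64_000, 16_000))  # 24000000
```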

>What I am driving at is: With going to SS is it possible that resource
>allocations can become more independent from the number of concurrent
>connections such that the number of connections can significantly increase?
>What is the next point of exhaustion that would limit the number of
>concurrent connections?

Probably the fact that the lock table in superserver is not dynamically
expandable.
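As an analogy for that limit (this is a sketch of the general behavior of a statically sized table, not the lock manager's actual data structure): a table allocated at a fixed size simply refuses new entries once its capacity is used up, so it caps the number of concurrent lock holders.

```python
class StaticLockTable:
    """Analogy for a lock table that cannot grow at runtime:
    once the fixed capacity is exhausted, new requests fail."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.in_use = 0

    def acquire(self) -> bool:
        if self.in_use >= self.capacity:
            return False  # table full: the new lock request is refused
        self.in_use += 1
        return True

    def release(self) -> None:
        if self.in_use > 0:
            self.in_use -= 1

table = StaticLockTable(capacity=2)
print(table.acquire(), table.acquire(), table.acquire())  # True True False
table.release()
print(table.acquire())  # True - capacity freed by the release
```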


We have answers.