Subject: Re: [IB-Architect] Connection resources, classic vs. SS
Author: Jim Starkey
At 12:58 PM 2/26/01 -0700, Jason Wharton wrote:
>> Actually, it's not. The size of the page cache is settable, which
>> removes the page size as a factor.
>How is this accomplished?

Database parameter block. Also feedback from the optimizer (probably
obsolete since the default number of buffers is probably more than
the optimizer would ever ask for).

>> Connections are one factor, as
>> are metadata size and complexity, request size, complexity and duration
>> (i.e. when requests are unprepared), and, for superserver, lock table
>> size and the number of connected databases.
>I feel like this is obscuring things a little and mine is over simplifying
>things a little. Is there a chance we can meet in the middle for the sake of
>a productive discussion?

Sorry, no. Memory is allocated as needed and rarely released. The
amount of memory needed is related to things like the number and
complexity (number of fields, indexes, triggers, virtual expressions,
validation expressions, etc.) of the objects referenced. There is no
control over these things.

>> Well, maybe yes, maybe no. Depends on how the operating system handles
>> a large number of processes.
>What kind of hardware and OS are we talking about here? For the sake of
>discussion can we simply assume typical hardware that most people are going
>to be using? Even better, low end worst case scenario hardware?

Firebird was designed to run on machines that are now used to drive
parking meters. The resource allocation problem in Firebird is that
it doesn't cache enough stuff, like compiled requests.

>> Every connection takes some amount of resources - I really don't
>> know at the moment how much, but non-zero for sure. Each transaction
>> also takes some amount of resources.
>Right, that is a given, to some extent. I'm wanting to get a feel for how
>much the SS architecture can be made to ease the hit of many more concurrent
>connections than it presently allows. I feel like it would be much more
>possible to move in this direction with the SS architecture than the
>classic. I'm talking like thousands of concurrent users... Is it even
>possible or would some sort of server array to pool the external connections
>have to be implemented to accomplish that? I know almost squat about sockets
>and limitations on how many pipes a network can have open at any given time.

Start learning. The rules change across platforms and over time. None
of the limitations are obvious or even stated.

A harder question is whether it is worth worrying about. If the browser
becomes the dominant platform, the number of active connections becomes
a non-issue, and the gating factor moves somewhere else.

>It is obvious I am going to have to do some homework on this. Could you give
>me a brief description of the lock table, what role it plays, and how it
>interacts as connections come and go?

The issue has been discussed at length. Start with the archives.

Jim Starkey