Subject: Re: [IB-Architect] Connection resources, classic vs. SS
Author: Jason Wharton

> At 12:58 PM 2/26/01 -0700, Jason Wharton wrote:
> >>
> >> Actually, it's not. The size of the page cache is settable, which
> >> removes the page size as a factor.
> >
> >How is this accomplished?
> >
> Database parameter block.

I've seen that option but never could get to grips with how it impacts
memory allocation as a whole. Most confusing is that we have a machine
hosting about 8 different databases, connected to by an assortment of 30
or more applications, many of which are browser clients (via CGI & ISAPI)
and some of which maintain persistent connections during the work day.
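For context, the buffer count is requested per attachment through the database parameter block (DPB) passed to isc_attach_database(). Here is a minimal sketch of how a client builds that block; the constant values (isc_dpb_version1 = 1, isc_dpb_num_buffers = 5) are taken from ibase.h and should be treated as assumptions if your header differs:

```python
# DPB layout: a version byte, then clusters of (item tag, length, value bytes).
ISC_DPB_VERSION1 = 1     # from ibase.h (assumed value)
ISC_DPB_NUM_BUFFERS = 5  # from ibase.h (assumed value)

def build_dpb(num_buffers):
    """Build a database parameter block requesting a page-cache size.

    num_buffers is encoded as a single byte here, so it must be 1..255;
    the server is free to clamp or ignore the request.
    """
    if not 1 <= num_buffers <= 255:
        raise ValueError("num_buffers must fit in one byte")
    return bytes([ISC_DPB_VERSION1, ISC_DPB_NUM_BUFFERS, 1, num_buffers])
```

As I understand it, under SuperServer the page cache is shared per database, so the setting from the first attachment generally wins rather than accumulating across applications; that would explain why raising it in every app has no visible cumulative effect.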

What if I increased the buffers setting on all of the apps? Is it something
that is scalar/cumulative or does the server get smart somewhere along the
way and ignore what I am telling it?

InterBase doesn't appear to benefit from the amount of memory our server
has. It has 1GB of RAM, yet the highest memory usage the NT Task Manager
has ever shown for ibserver is under 20MB. Can you explain this to me? Do
I even want it to use more than that?
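Back-of-the-envelope, the small footprint is roughly what a modest page cache predicts: cache memory is about one page-sized buffer per cache slot (numbers below are illustrative, not measured from this server):

```python
def cache_bytes(num_buffers, page_size):
    # Page-cache memory is roughly num_buffers page-sized slots.
    return num_buffers * page_size

# e.g. 2048 buffers of 4096-byte pages is only 8 MB,
# well under the ~20MB seen in Task Manager.
megabytes = cache_bytes(2048, 4096) // (1024 * 1024)
```

So unless the buffer count is raised substantially, the server simply never asks for most of that 1GB.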

> Also feedback from the optimizer (probably
> obsolete since the default number of buffers is probably more than
> the optimizer would ever ask for).

Over my head unfortunately... Gotta get in the code more I suppose.

> >> Connections are one factor, as
> >> are metadata size and complexity, request size, complexity and duration
> >> (i.e. when requests are unprepared), and for superserver, lock table
> >> size, and for superserver, the number of connected databases.
> >
> >I feel like this is obscuring things a little and mine is oversimplifying
> >things a little. Is there a chance we can meet in the middle for the sake of
> >a productive discussion?
> >
> Sorry, no. Memory is allocated as needed and rarely released. The
> amount of memory needed is related to things like the number and
> complexity (number of fields, indexes, triggers, virtual expressions,
> validation expressions, etc etc etc) of the objects referenced. There is
> no control over these things.

That is where I am at. I remember some sessions at BorCon where I was told
how I could control this to some extent, but my attempts proved futile.

> >>
> >> Well, maybe yes, maybe no. Depends on how the operating system handles
> >> a large number of processes.
> >
> >What kind of hardware and OS are we talking about here? For the sake of
> >discussion can we simply assume typical hardware that most people are likely
> >to be using? Even better, low-end worst-case-scenario hardware?
> >
> Firebird was designed to run on machines that are now used to drive
> parking meters. The resource allocation problem in Firebird is that
> it doesn't cache enough stuff, like compiled requests.

When you have 1GB of RAM this is a very frustrating point.

> >>
> >> Every connection takes some amount of resources - I really don't
> >> know at the moment how much, but non-zero for sure. Each transaction
> >> also takes some amount of resources.
> >
> >Right, that is a given, to some extent. I'm wanting to get a feel for how
> >much the SS architecture can be made to ease the hit of many more
> >connections than it presently allows. I feel like it would be much more
> >possible to move in this direction with the SS architecture than the
> >classic. I'm talking like thousands of concurrent users... Is it even
> >possible or would some sort of server array to pool the external
> >have to be implemented to accomplish that? I know almost squat about
> >and limitations on how many pipes a network can have open at any given
> >
> Start learning. The rules change across platforms and time. None of
> the limitations are obvious or even stated.
> A harder question is whether it's worth worrying about. If the browser
> becomes the dominant platform, the number of active connections becomes
> a non-issue, and the gating factor moves somewhere else.

I'm one of the knuckleheads who will be writing the technologies that
enable these browser clients, so I don't have the luxury of "not worrying
about it". These people are my customers, and my job is to help them "not
worry about it".

I am currently at the API level, so I am wondering how much InterBase
will be able to scale in its future revisions. If the direct interface to
the server is going to keep its limitation of handling only 200-300
concurrent connections, then I am forced to investigate other technologies
to sit between the client and the server to multiplex the load.

In short, I am strategizing what to do for customers who have written
applications with IBO that directly access the API and who, at the same
time, would like to scale above 300 concurrent users. Is IB going to give
me that in time, or am I going to have to write a layer that accomplishes
this within the given constraints?
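The "layer" in question would amount to a connection pool: a small, fixed set of real server attachments shared among many logical clients, so thousands of callers never hold thousands of connections at once. A minimal sketch (the factory and sizes are hypothetical; a real middle tier would also handle transactions, timeouts, and dead connections):

```python
import queue

class ConnectionPool:
    """Share a fixed number of real attachments among many callers."""

    def __init__(self, factory, size):
        # Pre-open `size` connections; callers check them in and out.
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=None):
        # Blocks until one of the pooled connections is free.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for the next caller to reuse.
        self._pool.put(conn)

# Usage sketch: `open_attachment` stands in for whatever actually
# attaches to the server (hypothetical name, not a real API call).
# pool = ConnectionPool(open_attachment, size=50)
# conn = pool.acquire()
# try:
#     ...  # run the request on conn
# finally:
#     pool.release(conn)
```

The point of the design is that the server only ever sees `size` connections, well under the 200-300 ceiling, regardless of how many browser clients are active upstream.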

> >It is obvious I am going to have to do some homework on this. Could you
> >me a brief description of the lock table, what role it plays, and how it
> >interacts as connections come and go?
> >
> The issue has been discussed at length. Start with the archives.

Will do.

Jason Wharton
CPS - Mesa AZ