| Subject | Re: [IB-Architect] Connection resources, classic vs. SS |
| --- | --- |
| Author | Jim Starkey |
| Post date | 2001-02-26T21:17:38Z |
At 01:51 PM 2/26/01 -0700, Jason Wharton wrote:
>I've seen that option but never could get to grips with how it impacted
>memory allocation on the whole. Most confusing is that we have a machine
>that hosts about 8 different databases on it which are connected to from an
>assortment of about 30 or more applications, many of which are browser
>clients (via CGI & ISAPI) and some apps which maintain persistent
>connections during the work day.
>

The size of the page cache defaults when the engine starts and can
be ratcheted up based on client and optimizer recommendations. But once
the cache is big enough to hold all frequently referenced pages, making
it bigger doesn't help much. There is also an observed but
undiagnosed bug with very large cache sizes (decreasing the cache
size results in markedly better performance). Large cache sizes
interact with the OS paging algorithm and the OS file cache, so
the analysis cannot be done abstractly.
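(For reference: the per-attachment cache recommendation travels in the database parameter block. A minimal sketch against the InterBase/Firebird C API of the era; the connection string and the 90-buffer figure are made-up examples:)

```c
#include <ibase.h>   /* InterBase/Firebird client API */

int attach_with_buffers(void)
{
    ISC_STATUS    status[20];
    isc_db_handle db = 0;
    char          dpb[8], *p = dpb;

    /* Build a database parameter block recommending 90 page buffers.
       The engine treats this as a recommendation, not a command. */
    *p++ = isc_dpb_version1;
    *p++ = isc_dpb_num_buffers;
    *p++ = 1;        /* length of the value, in bytes */
    *p++ = 90;       /* requested number of page buffers */

    if (isc_attach_database(status, 0, "server:/data/mydb.gdb",
                            &db, (short)(p - dpb), dpb))
        return 1;    /* check the status vector in real code */

    isc_detach_database(status, &db);
    return 0;
}
```

Under super server, where the cache is shared per database (see below), one client's recommendation can raise the cache for everybody.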
In the super server there is one page cache per database, shared
by all connections. The connection itself doesn't consume enough
memory to be worth talking about. It does, however, hold down a
socket, a reasonably scarce resource.
The best way to use a (non-Netfrastructure) database is with an
app server that maintains a single database connection shared
among all clients. If your app server architecture doesn't
support this, find a different architecture for Firebird.
In a better world, making a connection, doing a piece of work,
and breaking the connection would be so fast that the hassle of
caching connections isn't worth it. Firebird ain't there.
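In the meantime, sharing one connection as suggested above can be as simple as serializing client threads behind a lock. A rough sketch assuming POSIX threads; the names are hypothetical and error handling is omitted:

```c
#include <pthread.h>
#include <ibase.h>

/* One attachment for the whole app server, guarded by a mutex so
   client threads take turns instead of each holding its own socket. */
static isc_db_handle   shared_db = 0;
static pthread_mutex_t db_lock   = PTHREAD_MUTEX_INITIALIZER;

void do_unit_of_work(void (*work)(isc_db_handle *))
{
    pthread_mutex_lock(&db_lock);
    work(&shared_db);          /* run one client's piece of work */
    pthread_mutex_unlock(&db_lock);
}
```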
>What if I increased the buffers setting on all of the apps? Is it something
>that is scalar/cumulative or does the server get smart somewhere along the
>way and ignore what I am telling it?
>

Don't give advice you don't want taken. If you don't know, don't
give advice (hmmm, that does generalize, doesn't it?).
>InterBase doesn't appear to benefit from the amount of memory that our
>server has on it. It has 1GB of RAM and the highest amount of memory the
>task manager on NT has indicated ibserver using is under 20MB. Can you
>explain this to me some? Do I even want it to use more than that?
>

It got happier than a pig in mud at about 8 MB. It just isn't going
to get a lot happier.
>> Sorry, no. Memory is allocated as needed and rarely released. The
>> amount of memory needed is related to things like the number and
>> complexity (number of fields, indexes, triggers, virtual expressions,
>> validation expressions, etc. etc. etc.) referenced. There is no control
>> over these things.
>
>That is where I am at. I remember some sessions at BORCON where I was told
>how I could control this to some extent but my attempts proved futile.
>

You were misinformed. Your efforts will continue to be futile. The
engine builds the data structures necessary to do what is asked of
it and leaves them around for re-use. It should do more of this.
Recompiling the same query 2000 times a day gets tedious.
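On the client side the same idea applies: prepare once, execute many, instead of letting the engine recompile identical SQL on every call. A sketch using the DSQL calls of the era; the table, SQL string, and handles passed in are illustrative:

```c
#include <ibase.h>

/* Compile a request once and re-execute it many times.  Assumes an
   attached database and an active transaction are passed in. */
void count_hits(isc_db_handle *db, isc_tr_handle *trans)
{
    ISC_STATUS      status[20];
    isc_stmt_handle stmt = 0;
    int             i;

    isc_dsql_allocate_statement(status, db, &stmt);
    isc_dsql_prepare(status, trans, &stmt, 0,
                     "UPDATE ACCOUNTS SET HITS = HITS + 1", 1, NULL);

    for (i = 0; i < 2000; i++)      /* compiled once, run 2000 times */
        isc_dsql_execute(status, trans, &stmt, 1, NULL);

    isc_dsql_free_statement(status, &stmt, DSQL_drop);
}
```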
>>Sorry. Send the memory to me. Netfrastructure knows how to use it.
>> Firebird was designed to run on machines that are now used to drive
>> parking meters. The resource allocation problem in Firebird is that
>> it doesn't cache enough stuff, like compiled requests.
>
>When you have 1GB of RAM this is a very frustrating point.
>

40 Beach St, Manchester, Ma 01944 Att: Jim.
>> Start learning. The rules change across platforms and over time. None of
>> the limitations are obvious or even stated.
>>
>> A harder question is whether it's worth worrying about. If the browser
>> becomes the dominant platform, the number of active connections becomes
>> a non-issue, and the gating factor moves somewhere else.
>
>I'm one of the knuckleheads that will be writing technologies that enable
>these browser clients, so I don't have the luxury of "not worrying about
>it". These people are my customers and my job is to help them to "not worry
>about it".
>
>I am currently at the API level and so I am wondering how much InterBase
>will be able to scale in its future revisions. If the direct interface to
>the server is going to maintain its limitation to only handle up to 200-300
>concurrent connections then I am forced to investigate other technologies to
>sit between the client and the server to mutex the load.
>

My very own humble opinion is that until Firebird ditches classic,
super server will not scale. Algorithms that make sense in one
are dumb in the other. Compromises suck. Conditional code sucks
more.
>In short, I am strategizing what to do for customers who have written
>applications with IBO that directly access the API and at the same time
>would like to scale above 300 concurrent users. Is IB going to give me that
>in time or am I going to have to write a layer that will accomplish this
>within the given constraints?
>

InterBase, not a chance. Firebird, a chance.
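Until then, a middle layer can at least gate the load: cap concurrent attachments below the observed ceiling and make extra clients wait their turn. A rough sketch with a POSIX counting semaphore; the cap of 250 is just a figure inside the 200-300 range from the question:

```c
#include <semaphore.h>
#include <ibase.h>

#define MAX_ATTACHMENTS 250   /* stay under the observed 200-300 ceiling */

static sem_t attach_gate;

void init_gate(void) { sem_init(&attach_gate, 0, MAX_ATTACHMENTS); }

/* Each client passes through the gate around its attachment, so the
   server never sees more than MAX_ATTACHMENTS connections at once. */
void gated_attach(ISC_STATUS *status, isc_db_handle *db, char *path)
{
    sem_wait(&attach_gate);                       /* block if at the cap */
    isc_attach_database(status, 0, path, db, 0, NULL);
}

void gated_detach(ISC_STATUS *status, isc_db_handle *db)
{
    isc_detach_database(status, db);
    sem_post(&attach_gate);                       /* free a slot */
}
```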
[The opinions expressed here are not necessarily those of my beloved
spouse but are never-the-less correct.]
Jim Starkey