Subject Re: Oldest Snapshot and Performance Issues
Author Greg Kay
--- In firebird-support@yahoogroups.com, Helen Borrie <helebor@t...>
wrote:
>
> At 01:46 AM 26/10/2005 +0000, you wrote:
>
> > > Possibly find some way to bring the number of users and the number of
> > > connections closer together? Unlike Superserver (which shares page cache
> > > dynamically) Classic allocates a static lump of page cache for each
> > > connection. You have an unusually high cache allocation set (4K * 1000
> > > pages, or 4 Mb per connection). Currently your peak time load has 1.8 Gb
> > > of RAM tied up in page caches. Was this 1000 pages decision based on any
> > > load/test metrics, or was it set on the assumption that "more is better"?
> >
> >The multiple connections are related to legacy applications which we
> >are slowly removing. Why will bringing down the connection count
> >improve performance?
>
> 1) it would reduce the amount of physical RAM that your over-sized page
> caches make unavailable to the server processes. Since you have a peak of
> ~450 server processes running, starving them of physical RAM is not
> performance-friendly. More RAM --> faster performance on memory-intensive
> tasks. Database engines love RAM.
>

Yes, but I don't understand how we are starving the server of RAM when
we never use more than 70% of the available RAM.
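
Just to check I'm reading your figures the same way, here is the
back-of-envelope sum as I understand it (a quick Python sketch of my own;
the page size, buffer count and connection count are the numbers from this
thread, nothing measured):

    # Rough check of per-connection and total Classic page cache,
    # using the figures discussed in this thread (assumed, not measured).
    PAGE_SIZE = 4 * 1024         # 4K database page size
    CACHE_PAGES = 1000           # our current per-connection buffers setting
    PEAK_CONNECTIONS = 450       # approximate peak number of Classic processes
    TOTAL_RAM = 8 * 1024**3      # 8 Gb in the server

    per_connection = PAGE_SIZE * CACHE_PAGES           # roughly 4 Mb each
    total_cache = per_connection * PEAK_CONNECTIONS    # roughly 1.7-1.8 Gb

    print("cache per connection: %.1f Mb" % (per_connection / 1024.0**2))
    print("cache at peak load:   %.2f Gb" % (total_cache / 1024.0**3))
    print("share of total RAM:   %.0f%%" % (100.0 * total_cache / TOTAL_RAM))

That comes out at roughly 1.7-1.8 Gb, or about 21% of the 8 Gb, which is
where our "only 20% of memory" figure below came from.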

> 2) One user <---> one connection means that the same cache is available to
> that user for all operations. On Classic, one connection doesn't know
> about other connections' page caches.
>

Yes, this makes sense, but most of our users are accessing different
records (with the exception of the system tables).

>
> >At peak periods we are only using 5Gb of the 8Gb currently available
> >on the server. So we know we are not swapping to disk. It is
> >interesting that you say 1000 is a high setting for the buffers as we
> >had been thinking how low it was since it is only 20% of available
> >memory and about 90% of our queries are indexed retrievals.
>
> Are you sure you understand what the page caches are used for? They store
> recently used pages and indexes, that's all. So, repeat queries on the
> same sets can use the cache if there's something useful there and avoid
> some disk reads. So - the page cache has a seriously important role to play
> in reducing disk i/o; but not important enough to justify starving the
> server processes of RAM.
>

In our system a user will often query the same information 2 or 3
times before moving on to their next piece of work. A large number of
the queries access a couple of megabytes of information, which the user
then analyses. The reason we do this is related to our audit
requirements.
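
As a rough check (again just my own arithmetic on the figures above), that
repeat-query working set does fit inside a single connection's cache:

    # Does a "couple of meg" of repeatedly-queried data fit in one
    # connection's cache?  (Assumed figures from this thread.)
    PAGE_SIZE = 4 * 1024        # 4K page size
    CACHE_PAGES = 1000          # per-connection buffers
    working_set = 2 * 1024**2   # ~2 Mb queried 2 or 3 times by one user

    pages_needed = working_set // PAGE_SIZE   # 512 pages
    print("pages needed: %d of %d available" % (pages_needed, CACHE_PAGES))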

> You don't seem to attach much value to the idea of testing a variety of
> cache settings under load before committing your production users to using
> them. Yet you have messed about radically with the default setting of 75
> pages, apparently without lab-testing the effects on performance. Seems a
> bit odd to me....
>
Sorry, I left out some information. When we switched from SuperServer
to Classic we wrote programs that tested the performance impact of
settings changes on our development and test environments before
releasing the changes to production. We probably spent about a month
switching from SuperServer to Classic and experimenting with the settings
before we came up with the ones we are currently using. What I'm not sure
of is how to accurately test a production environment like ours in Firebird.
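
Is the answer simply to fire a few hundred concurrent connections at a
copy of the database and time them? Something along these lines is what I
have in mind (a sketch only; kinterbasdb is just one driver choice, and
the DSN, credentials and query are placeholders, not our real system):

    # Crude concurrent load sketch: N threads, each opening its own
    # connection (so each gets its own Classic cache) and timing a query loop.
    import threading, time
    import kinterbasdb

    DSN = "server:/data/test_copy.fdb"              # placeholder
    QUERY = "SELECT COUNT(*) FROM RDB$RELATIONS"    # placeholder query

    def worker(results, idx, iterations=100):
        con = kinterbasdb.connect(dsn=DSN, user="SYSDBA", password="masterkey")
        cur = con.cursor()
        start = time.time()
        for _ in range(iterations):
            cur.execute(QUERY)
            cur.fetchall()
        results[idx] = time.time() - start
        con.close()

    def run(n_connections=50):
        results = [None] * n_connections
        threads = [threading.Thread(target=worker, args=(results, i))
                   for i in range(n_connections)]
        for t in threads: t.start()
        for t in threads: t.join()
        print("average seconds per worker: %.2f" % (sum(results) / len(results)))

    if __name__ == "__main__":
        run()

That still doesn't feel like it reproduces a real production mix of users,
which is the part I'm unsure about.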

With some of our bigger reports we noticed large performance regressions
when we changed from SuperServer (with a page buffer setting of 10,000) to
Classic (with a page buffer setting of 1,000). The aim is to move these
large reports onto a separate server which is kept up to date by replication.

When setting the value to 1,000 we did some research, including
checking "The Firebird Book". One suggestion on page 245 is to try a
cache setting based on two-thirds of available free memory, which would
have us using a buffer setting of about 3,000.
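
For what it's worth, the working behind our "about 3,000" figure looked
like this (my own sketch; it assumes two-thirds of the full 8 Gb divided
across our ~450 peak connections, which may not be exactly what the book
means by "available free memory"):

    # Rough working behind the "about 3,000" figure: two-thirds of memory,
    # shared across peak Classic connections, expressed in 4K pages.
    PAGE_SIZE = 4 * 1024
    TOTAL_RAM = 8 * 1024**3
    PEAK_CONNECTIONS = 450

    cache_budget = TOTAL_RAM * 2 // 3                  # two-thirds of memory
    per_connection = cache_budget // PEAK_CONNECTIONS  # bytes per connection
    buffers = per_connection // PAGE_SIZE              # ~3,100 pages

    print("suggested buffers per connection: %d" % buffers)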

> ./heLen
>

Greg