Subject | Re: [firebird-support] Re: Oldest Snapshot and Performance Issues
---|---
Author | Helen Borrie
Post date | 2005-10-26T02:45:19Z
At 01:46 AM 26/10/2005 +0000, you wrote:
> > Possibly find some way to bring the number of users and the number of
> > connections closer together? Unlike Superserver (which shares page
> > cache dynamically) Classic allocates a static lump of page cache for
> > each connection. You have an unusually high cache allocation set
> > (4K * 1000 pages, or 4 Mb per connection). Currently your peak-time
> > load has 1.8 Gb of RAM tied up in page caches. Was this 1000-page
> > decision based on any load/test metrics, or was it set on the
> > assumption that "more is better"?
>
> The multiple connections are related to legacy applications which we
> are slowly removing. Why will bringing down the connection count
> improve performance?

1) It would reduce the amount of physical RAM that your over-sized page
caches make unavailable to the server processes. Since you have a peak of
~450 server processes running, starving them of physical RAM is not
performance-friendly. More RAM --> faster performance on memory-intensive
tasks. Database engines love RAM.

2) One user <---> one connection means that the same cache is available to
that user for all operations. On Classic, one connection doesn't know
about other connections' page caches.
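The static-allocation arithmetic behind point 1 can be sketched as follows. This is just an illustration of the figures already quoted in this thread (4 KB page size, 1000 cache pages per connection, ~450 peak Classic processes); nothing here is Firebird API code.

```python
# Classic allocates a fixed page cache per connection, so the aggregate
# cache footprint grows linearly with the connection count.
page_size = 4 * 1024          # 4 KB database page size (bytes)
page_buffers = 1000           # cache pages per connection (this thread's setting)
peak_connections = 450        # peak number of Classic server processes

cache_per_connection = page_size * page_buffers          # ~4 MB
total_cache = cache_per_connection * peak_connections    # ~1.8 GB pinned at peak

print(f"Per connection: {cache_per_connection / 1e6:.1f} MB")
print(f"Peak total:     {total_cache / 1e9:.2f} GB")
```

That ~1.8 GB is RAM the 450 server processes themselves can never use, which is the starvation being described above.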
> At peak periods we are only using 5Gb of the 8Gb currently available
> on the server. So we know we are not swapping to disk. It is
> interesting that you say 1000 is a high setting for the buffers as we
> had been thinking how low it was since it is only 20% of available
> memory and about 90% of our queries are indexed retrievals.

Are you sure you understand what the page caches are used for? They store
recently used pages and indexes, that's all. So, repeat queries on the
same sets can use the cache if there's something useful there and avoid
some disk reads. The page cache has a seriously important role to play
in reducing disk i/o, but not important enough to justify starving the
server processes of RAM.
You don't seem to attach much value to the idea of testing a variety of
cache settings under load before committing your production users to using
them. Yet you have messed about radically with the default setting of 75
pages, apparently without lab-testing the effects on performance. Seems a
bit odd to me....
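As a rough screening step before any such load test, the aggregate footprint of each candidate buffer setting can be tabulated first. A minimal sketch, using the figures from this thread (4 KB pages, ~450 peak processes, 8 GB of RAM, the 75-page Classic default, and the current 1000-page setting); the intermediate candidates 256 and 512 are purely illustrative:

```python
# For each candidate page-buffers setting, estimate how much RAM the
# page caches alone would pin at peak, so obviously unsafe values can
# be discarded before load-testing the remainder.
PAGE_SIZE = 4 * 1024       # 4 KB pages (bytes)
PEAK_PROCESSES = 450       # peak Classic connections
TOTAL_RAM_GB = 8.0         # physical RAM on the server

def cache_footprint_gb(page_buffers: int) -> float:
    """Aggregate page-cache RAM (decimal GB) at peak for one setting."""
    return PAGE_SIZE * page_buffers * PEAK_PROCESSES / 1e9

for buffers in (75, 256, 512, 1000):   # 75 is the Classic default
    gb = cache_footprint_gb(buffers)
    print(f"{buffers:>5} pages -> {gb:5.2f} GB of {TOTAL_RAM_GB:.0f} GB")
```

Only the settings that leave comfortable headroom for the server processes are worth carrying forward into a real load test.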
./heLen