| Subject   | Re: What's the upper limit on LockHashSlots? |
|-----------|----------------------------------------------|
| Author    | Stephen J. Friedl                            |
| Post date | 2007-01-03T20:29:37Z                         |
--- In firebird-support@yahoogroups.com, "Ann W. Harrison"
<aharrison@...> wrote:
> If you've got chains that long, I'd max out the slots.

This is as I thought.
So, under what circumstances would I *not* want to max out the hash
slots to 2039? The hash computation doesn't appear to be much more
than an integer modulo, so it's not like larger moduli take longer to
compute.
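
To make that concrete, here's a toy sketch of a modulo-based slot pick. This is not Firebird's actual lock-manager code; the key and slot count are made-up values for illustration only:

```c
#include <stdio.h>

#define LOCK_HASH_SLOTS 2039   /* hypothetical setting; 2039 is prime */

/* Pick a hash slot for a lock key: essentially one integer modulo,
 * so the per-lookup cost is the same with 101 slots or 2039. */
static unsigned hash_slot(unsigned long lock_key)
{
    return (unsigned)(lock_key % LOCK_HASH_SLOTS);
}

int main(void)
{
    unsigned long key = 123456789UL;   /* stand-in for a lock's key */
    printf("key %lu -> slot %u of %d\n", key, hash_slot(key), LOCK_HASH_SLOTS);
    return 0;
}
```

Whether the modulus is 101 or 2039, it's still one integer division per lookup.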
Assuming that RAM is not an issue for us, the only thing I can imagine
here is that any process that had to walk the whole set of hash slots
would take longer during low-use situations -- it would have to hit
2000 slots instead of 100 to find out that there is nothing to do --
but once the lock table gets fuller, it would all be dominated by the
locks that actually exist. But this is all speculation.
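
For what that speculation is worth, a full sweep over the table looks roughly like this (the structures are invented for illustration, not Firebird's real lock-table layout); with an empty table the work is proportional to the slot count alone, while a busy table is dominated by the chained entries:

```c
#include <stdio.h>
#include <stddef.h>

/* Invented structure for illustration only -- not Firebird's layout. */
struct lock_entry {
    struct lock_entry *next;   /* next entry chained into the same slot */
};

/* Visit every slot head, then every chained entry: O(slots + locks). */
static size_t sweep(struct lock_entry **slots, size_t n_slots)
{
    size_t found = 0;
    for (size_t s = 0; s < n_slots; s++)
        for (struct lock_entry *e = slots[s]; e != NULL; e = e->next)
            found++;
    return found;
}

int main(void)
{
    /* Empty table: nothing to do, but still 2039 slot heads to check. */
    struct lock_entry *empty[2039] = { 0 };
    printf("locks found: %zu\n", sweep(empty, 2039));
    return 0;
}
```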
Are there even *corner cases* where this would be sub-optimal?
> I'd also try throttling back a bit on the size of the individual
> caches, since every page in cache is a lock.

Hmmm, this is curious, and was what I was going to ask about anyway.
Since (in this application) one database file (say, CL_123.gdb) could
be open by multiple fb_inet_server processes, doesn't a larger cache
just mean more coherency issues anyway? I had done some playing with
larger cache sizes, but since they are sadly not shared in the Classic
server, it was not clear that this was going to help much.
Thinking about this: a larger cache would benefit when there is just
one server process opening a file, but a smaller cache would benefit
when there's more than one.
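
Rough back-of-envelope arithmetic for that tradeoff, assuming (as the quote above suggests) roughly one lock per cached page; all the numbers here are hypothetical, just to show the shape of the calculation:

```c
#include <stdio.h>

int main(void)
{
    /* All values below are made up for illustration. */
    int processes   = 20;     /* fb_inet_server processes on the file  */
    int cache_pages = 2048;   /* per-process page cache size           */
    int slots_small = 101;    /* modest LockHashSlots setting          */
    int slots_max   = 2039;   /* maxed-out setting discussed above     */

    int locks = processes * cache_pages;   /* ~one lock per cached page */

    printf("~%d locks: avg chain %.1f with %d slots, %.1f with %d slots\n",
           locks,
           (double)locks / slots_small, slots_small,
           (double)locks / slots_max,   slots_max);
    return 0;
}
```

With those made-up numbers, either more slots or a smaller per-process cache shortens the average chain.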
And in any case, a smaller cache would leave more RAM for the
*filesystem* buffer cache, which is properly shared and doesn't have
those coherency issues.
I feel like a voyeur looking into the bowels of the database this way :-)
Thanks for the helpful insight.
Steve
---
Steve Friedl | UNIX Wizard | Microsoft MVP | www.unixwiz.net