Subject | Re: [firebird-support] Classic vs Superserver |
---|---|
Author | Alexandre Benson Smith |
Post date | 2013-08-29T17:31:08Z |
On 29/8/2013 12:17, Tim Ward wrote:
> But I thought Superserver used threads? And threads can run on
> separate CPUs? (Processes are an address space thing, not a CPU thing.)

There are threads, but in fact they are "serialized". Perhaps that's an
over-simplification (I don't know the FB internals), but the threads do
not run in parallel (FB 3.0 will fix that). If you have a multi-core
server (which is an obvious thing these days) you should prefer Classic
Server; the only case where I think SuperServer would be the choice is
when you have just one connection per database.
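In case it helps, a quick way to check which flavour you are running on
Linux is to look at the process names - the names below assume a typical
Firebird 2.x install, so treat them as an assumption:

    ps -e | grep fbserver          # SuperServer: one multi-threaded process
    ps -e | grep fb_inet_server    # Classic: one process per connection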
Perhaps you have automatic sweep disabled (check with gstat -h); if
sweep is disabled the garbage will accumulate, so when a query needs to
"scan" the table it will pay the cost of cleaning it up. I am not saying
you should use automatic sweep, since it could trigger in the middle of
the day, generating an "unknown" slowdown... What I suggest is that you
keep automatic sweep disabled and run a manual sweep (gfix -sweep)
during off-peak hours.
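Something along these lines should do it (database path and credentials
below are just placeholders):

    gstat -user SYSDBA -password masterkey -h /path/to/your.fdb
        (look at the "Sweep interval" line: 0 means automatic sweep is off)
    gfix -user SYSDBA -password masterkey -h 0 /path/to/your.fdb
        (set the sweep interval to 0, i.e. disable automatic sweep)
    gfix -user SYSDBA -password masterkey -sweep /path/to/your.fdb
        (manual sweep; schedule it off-peak, from cron for example)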
I would not worry about the shared cache; the file system cache will do
almost the same as the shared cache in SS does. You will have a small
memory overhead because of the separate caches for each connection: even
if you set the default cache to 1000 pages you will have 8MB or 16MB of
cache duplication per connection, not that much...
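Just to put numbers on it (assuming an 8KB or 16KB page size): 1000
pages x 8KB = 8MB, and 1000 pages x 16KB = 16MB of cache per connection.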
see you !