Subject Re: [firebird-support] Re: Performance lost with lots of connections
Author Anderson Farias

----- Original Message -----
From: "selensky" <selensky@...>

> We use just 2 SAS drives mirrored in RAID.

not bad, but if you could get one or two SATA drives to install the system
on, and leave the SAS drives to the database alone, then I *think* it would
be a better setup.

> What do you mean temporary files? Firebird temporary files?


> Would it help to have them in a separate drive? Why not use a RAM drive?

yes, a RAM drive is a better way to go

BTW, it looks like FB 2+ hardly creates these temp files any more...

> My page buffers are set to 10 000!!! I have set it up when I was
> still using Firebird Superserver. Is this bad with classic? I have
> lots of ram and it has never been fully used so far.

well, if that's right (let's say you have a 4K page size db) then each
connection (fb process) will get about 40 MB of RAM for cache. depending on
how many concurrent connections you have and the amount of RAM, you can be
ok or in trouble -- unlike SuperServer, Classic has a separate cache for
each process.
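To put rough numbers on that (my own back-of-the-envelope sketch, not something from the thread): under Classic, each connection's cache is page_buffers * page_size, and the total grows linearly with connections:

```python
# Rough estimate of Firebird Classic cache RAM. Each connection is a
# separate process with its own page cache of page_buffers * page_size.

def classic_cache_bytes(page_size, page_buffers, connections):
    """Total cache memory across all Classic processes, in bytes."""
    return page_size * page_buffers * connections

# Numbers from the post: 4 KB pages, 10 000 buffers -> ~39 MB per process.
per_conn = classic_cache_bytes(4096, 10_000, 1)
print(per_conn // (1024 * 1024), "MB per connection")

# With, say, 50 concurrent connections that's nearly 2 GB just for cache:
total = classic_cache_bytes(4096, 10_000, 50)
print(total // (1024 * 1024), "MB total")
```

The "50 connections" figure is just an assumed example; plug in your own connection count.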

> Are you saying that even with all these processors, when a slow
> query is being executed it slows down everybody else?

no -- the 'batch processes' I mentioned were a lot more than just 'selects',
they deleted, updated and inserted a lot of records. I think the update part
was really the 'killer', since it was 'blocking' records, making other
transactions wait.

this "thing" we did was so insane the 4 processors went to 100% when it was
running ('cause it was not 1 but 3-5 workstations running it
concurrently) -- no DBMS can survive bad programming .. lol ;-)

BTW, with Classic it seems you don't get this server slowdown 'effect' from
having 1 slow query -- but you do have it with SS.

> Please tell me which configuration parameter helped a lot, I am
> really interested! Would it be possible to send your firebird.conf

we raised LockMemSize to something like 1 or 2 MB (it's just the starting
size on Classic anyway), and -- more importantly -- LockHashSlots to some
prime value right after 500 (501?)
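A small aside: 501 = 3 * 167 is actually composite; the first prime after 500 is 503. A tiny helper to find it (my own illustration, nothing to do with Firebird's code):

```python
# The post suggests a prime just above 500 for LockHashSlots.
# Note 501 = 3 * 167 is not prime; the first prime after 500 is 503.

def is_prime(n):
    """Trial division, fine for small candidate slot counts."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def next_prime(n):
    """Smallest prime strictly greater than n."""
    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

print(next_prime(500))  # -> 503
```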

I don't have the actual values and can't get them (the conf file) right now,
but it's not a big deal.

we found we needed to change these params by looking at the stats reported by
fb_lock_print. you may find the messages on this group where Ann/Helen and
others helped me get through this -- search for fb_lock_print. if you
can't find more details, let me know.

> How does the gap affect performance? Where can I read about it (I
> know what it is, but I am not sure why it affects performance badly)

Well, the more old transactions are kept around, the more record versions there are to go through.
I'm sure others can respond to you better than I can... =) this article may
be of interest:
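One way to keep an eye on the gap is the header page counters printed by `gstat -h`. Here's a small parsing sketch -- the sample text below is hand-written for illustration, and the exact labels/layout may differ between Firebird versions:

```python
import re

# Parse transaction counters from `gstat -h <database>` header output
# and compute the gap. SAMPLE is invented for illustration only.
SAMPLE = """\
Database header page information:
        Oldest transaction      1042
        Oldest active           90817
        Oldest snapshot         90817
        Next transaction        98231
"""

def header_counters(text):
    """Return a dict of the transaction counters found in gstat output."""
    counters = {}
    for label in ("Oldest transaction", "Oldest active", "Next transaction"):
        m = re.search(label + r"\s+(\d+)", text)
        if m:
            counters[label] = int(m.group(1))
    return counters

c = header_counters(SAMPLE)
gap = c["Oldest active"] - c["Oldest transaction"]
print("OAT - OIT gap:", gap)
```

A gap that keeps growing (rather than staying small and roughly constant) is the symptom to watch for.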

> Is there a better way to kill such a process?

not that I know of

> How about SuperServer, where you only have one EXE, how do you kill a
> specific connection?

You don't (at least not with FB 1.5). but AFAIK you don't have to care
about it, SS will handle it for you -- however, SS is not an option on an
SMP box running Windows.

> If it stays on forever it might even keep a transaction open which
> will cause the gap between OAT and OIT to increase and increase....

It can happen with Classic, but I don't think it's true for SS