Subject: Re: [firebird-support] Firebird 2.5.3 SS Tuning

Your big problem is low RAM with a 32-bit OS and FB. In particular, your sweep can take a very, very long time if some of your indexes are bigger than the available RAM. And yes, upgrade to a new server with a minimum of 16 GB RAM; since your DB is 50 GB, 64 GB would be best.

Then back up your DB and restore it with a 16k page size, and increase your DB buffers (the value is in pages).
You can calculate the cache size as dbpagesize * dbbuffers = bytes used for cache.
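As a concrete sketch of the steps above (the paths, credentials, and buffer count below are illustrative placeholders, not values from this thread):

```shell
# Hypothetical sketch: rebuild the DB on 16k pages with gbak, then size the cache.
# (Run these manually; paths and credentials are placeholders.)
#
#   gbak -b -user SYSDBA -password masterkey mydb.fdb mydb.fbk
#   gbak -c -p 16384 -user SYSDBA -password masterkey mydb.fbk mydb_16k.fdb
#
# With e.g. DefaultDbCachePages = 131072 in firebird.conf, the page cache uses:
PAGE_SIZE=16384
DB_BUFFERS=131072
echo $((PAGE_SIZE * DB_BUFFERS))   # prints 2147483648, i.e. 2 GB
```

Note that 2 GB would only be usable on a 64-bit build; on a 32-bit server you would have to stay well below the process address-space limit.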

I think your stats query will then run in a few minutes instead of hours.

I made the same change on my DB server and it works like a rocket ;-)

Karol Bieniaszewski
Sorry for the language errors

----- Reply message -----
From: "Tiziano tmdeveloper@... [firebird-support]" <>
To: <>
Subject: [firebird-support] Firebird 2.5.3 SS Tuning
Date: Thu, Sep 18, 2014 18:06


Hello, this is my first topic. I have a complex (for me) configuration and I hope I explain the problem correctly (English is not my native language; please excuse any grammatical or typing errors).

  I have a virtual server with these specs: Windows Server 2008 Standard (Build 6002, SP2) with 4 GB RAM, HD1 (primary): 500 GB, HD2 (data): 1.4 TB, 4 CPUs (Intel Xeon 2.40 GHz).
  On this server there is a Firebird 2.5.3 SuperServer (firebird.conf left at the installation defaults) with two DBs of about 40-50 GB each.
I must migrate from one "set of applications" to a "new set". During the migration both sets of applications must work, as the customer requested (a parallel testing period).
- Via remote desktop, each customer opens a win32 application; this application opens a connection to the DB (one connection, one transaction, a lot of statements);
- Windows Task Scheduler runs other win32 applications that import/export a lot of data (this usually happens at night, sometimes during the day);
- On another virtual server there is an Access DB (yes, MS Access) that is connected via "external table" (ODBC) to a view in Fb (on both DBs). During the night a scheduler on this server opens the Access DB and fetches data from Fb (about EIGHT hours of processing for both DBs) for statistics;
- A web application, on a new virtual server, connects to the DB (one connection per session);
- About 30 WCF clients connect every X seconds (configurable, currently 5 s) to a WCF server (on the same new virtual server as above). The WCF server opens a connection to the DB per session (I'm currently investigating this), performs a few operations (some selects, inserts, updates; a few deletes) and closes the connection;
- Other applications/tasks are the same as above (in the meantime, I'm thinking about how to reduce the execution time of the MS Access process).

My first test was to <<switch on the WCF service>> and... the DB server "crashed" (I had to restart it). I found these errors in firebird.log:
- a lot of <<INET/inet_error: read errno = 10054>> and <<INET/inet_error: read errno = 10053>>
- some <<Operating system call _beginthreadex failed. Error code 8>> before the "crash"

So I think there are two different problems:
1. The WCF connection handling (open, close, release) is not well configured (I'm working on it);
2. Firebird Server with the default config is not designed for this "application schema/access type".

For the 2nd problem I have found a lot of information on the web (also in this group) but I have some doubts about the correct resolution.
So, my questions are:
1. SuperServer: max RAM is 2 GB (32-bit OS limitation), right? How can I calculate/estimate the max number of connections before the server reaches this limit? I think it depends on the DB/FB configuration, but I don't understand the calculation (and what I have to change to handle multiple connections at the same time, or how I can "free" the cache). Both DBs have Page Size 4096 and Page Buffers 2048.
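For reference, applying the same page_size * buffers arithmetic to the settings quoted in this question shows how small the current cache is:

```shell
# Settings from the question: 4k pages, 2048 page buffers.
# In SuperServer this cache is per database and shared by all connections.
echo $((4096 * 2048))   # prints 8388608, i.e. 8 MB of page cache
```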
2. SuperServer or SuperClassic? In some forums/articles/manuals I found that SuperClassic on a 32-bit OS is not recommended because "On 32-bit systems, SuperClassic will be the first to run out of memory under high load". But currently (see the second error in the log above) the server runs out of memory anyway (I have the option of switching to a 64-bit OS, but I must be sure that this change solves the problem).
3. SWEEP: I don't understand if sweep is connected with the cache (I think it is). Currently the interval is set to the default and, in addition to automatic sweep, I added a nightly scheduled task that forces a sweep and runs a simple query (select from rdb$database + commit). The question is: does a low or high sweep interval change DB performance? As I understand it, a low interval improves open-transaction performance (and reduces cache usage?), but the sweep process slows down the server, so a lower interval (more automatic sweeps during the day) improves "new transaction" performance but reduces "server performance", right? Is there a method for calculating the right sweep interval?

Thank you all in advance,