Subject | FB performance
---|---
Author | Marek Konitz
Post date | 2006-01-05T21:48:52Z
Hi,
I have a database with about 10,000,000 records in the main table and at
most 30 concurrent connections. Every client updates one record in that
table every 1-2 minutes and from time to time selects a set of about
30,000 records. It runs extremely slowly...
The server is a P4 Xeon with 1.5 GB RAM, running Firebird 1.5.2 on
Windows 2003; the database page size is 8192.
When Firebird was installed as SuperServer it used about 70 MB of
memory; now it's Classic Server and each connection consumes about 5 MB.
Classic runs a bit faster than Super, but CPU usage is still around
90-100% all the time.
- I've already reviewed the indices.
- I don't think the users could be locking each other - every client
works on a separate set of records.
- I know what CPU affinity is - it was set to 2 (second processor) with
SuperServer.
- select count(*) from the table takes several minutes(!) (tested both
in IBExpert and in the application).
- The client application uses IBX controls.
- After a backup/restore it works slightly better, but only for several
hours.
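The temporary improvement after backup/restore suggests old record versions (garbage) piling up between restores. A manual sweep can reclaim them without a full backup/restore cycle; a sketch using the gfix and gstat tools that ship with Firebird 1.5 (the database path and credentials below are placeholders, not your actual values):

```shell
# Sweep the database to remove committed old record versions.
# Path, user and password are placeholder assumptions.
gfix -sweep -user SYSDBA -password masterkey C:\data\mydb.fdb

# Inspect the header page: a large gap between the oldest transaction
# and the next transaction indicates garbage accumulating.
gstat -h C:\data\mydb.fdb
```

This is an administrative fragment, not something to run blindly: a sweep on a 10-million-record table will itself consume I/O and CPU, so it is usually scheduled off-peak.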
Questions:
- How can I increase the amount of memory used by the Firebird server?
- Which parameters should I change in firebird.conf?
- Are there any 'hidden' parameters (not mentioned in the comments) in
firebird.conf? Is there any documentation about this file?
- SuperServer is said to be more efficient in deployments like this, so
why is Classic faster here?
- Do IBX controls have a negative influence on the database server? I'm
using mostly TIBSQL controls, rarely TIBQuery. All filtering is done on
the server side and the result datasets are minimized as much as
possible.
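On the memory question, a minimal firebird.conf sketch with the cache-related parameters from the stock Firebird 1.5 configuration file (the values shown are illustrative assumptions to be tuned, not recommendations):

```
# firebird.conf fragment - values are assumptions, tune for your load.

# Database cache size in pages. NOTE: under Classic Server this cache
# is allocated PER CONNECTION, so 2048 pages x 8 KB x 30 connections
# is roughly 480 MB; under SuperServer it is shared by all connections.
DefaultDbCachePages = 2048

# Upper limit of memory used for in-memory sorting before spilling
# to temporary files on disk (bytes).
SortMemUpperLimit = 67108864
```

Large sorts spilling to disk and an undersized per-connection page cache are two common reasons for the symptoms described, which is why these two parameters are the usual starting point.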
I've already asked about Firebird performance on this forum and you were
surprised that it's slow. I don't know whether the cause is bad
application design, database design, or server configuration. Neither
the application nor the database structure is very complicated. I hope
you can help - the number of records will grow and the number of clients
will double in a short time... If you need more configuration details,
just tell me.
Best Regards,
Marek Konitz