Subject:   Re: [firebird-support] FB performance
Author:    Helen Borrie
Post date: 2006-01-05T23:16:07Z
At 10:48 PM 5/01/2006 +0100, you wrote:
>Hi,
>
>I've database with about 10 000 000 records in main table, max 30
>concurrent connections. Every client updates one record in that table
>every 1-2 minutes and from time to time selects a set of data with ca 30
>000 records. It run extremely slow...
>
>The server is: p4 xeon 1,5GB RAM, Firebird 1.5.2, win 2003, database
>page size 8192
>When fb was installed as superserver it used about 70MB of memory, now
>it's classic server and each connection consumes ca 5MB. Classic goes a
>bit faster than super, but still cpu usage is about 90-100% all the time.
>
>- I've already reviewed indices
Are you sure you are not indexing any columns with low selectivity?
Have you looked at the plans generated?
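If you haven't, a quick way to see a plan from IBX is to prepare the
statement and read it back (an untested sketch; sqlTest is a hypothetical
TIBSQL and the query is invented):

   sqlTest.SQL.Text := 'select id from main_table where status = 1';
   sqlTest.Prepare;
   ShowMessage(sqlTest.Plan);   { e.g. PLAN (MAIN_TABLE INDEX (...)) }

isql shows you the same thing if you SET PLAN ON before running the query.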
>- I don't think that users could lock each other - every client works on
>separate set of records
Presumably you would know if locking was the slowdown, if the user
transactions are all NO WAIT and your application is handling
conflicts. Check the transaction attributes, though, if you have not
already done so.
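If you set the attributes in code, a NO WAIT read-write transaction in IBX
looks something like this (a sketch only; trWrite is a hypothetical
TIBTransaction):

   trWrite.Params.Clear;
   trWrite.Params.Add('concurrency');   { snapshot isolation }
   trWrite.Params.Add('write');
   trWrite.Params.Add('nowait');        { conflicts raise immediately }

With nowait, a lock conflict comes back to the application at once instead
of waiting in a queue, so your exception handling needs to deal with it.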
>- I know what cpu affinity is - was set to 2 (second processor) with
>superserver
>- select count(*) from table takes several minutes(!) /tested both in
>ibexpert and application/
Are you trying to use select count(*) in applications? Note that, with
IBX, if you use the RecordCount property, you will be using select count(*)....
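To illustrate (component and table names invented): something as innocent as

   lblCount.Caption := IntToStr(qryMain.RecordCount);

can fire a select count(*) over the whole set - minutes of work on a
10-million-row table. If you genuinely need a count, issue one deliberately
and narrow it down:

   sqlCount.SQL.Text :=
     'select count(*) from main_table where branch_id = :b';
   sqlCount.Params.ByName('b').AsInteger := CurrentBranch;
   sqlCount.ExecQuery;
   Total := sqlCount.Fields[0].AsInteger;
   sqlCount.Close;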
>- client application uses ibx controls
You mean ibx components, right? ibx doesn't have any controls.
>- after backup/restore works slightly better, but only for several hours
Then look closely at how transactions are being committed. The IBX default
for AutoCommit is to use CommitRetaining. This is handy for Delphi client
apps and disastrous for the server. It causes garbage to build up
steadily. You are seeing better performance on a clean database and
degrading performance as garbage builds up and doesn't get cleared. Check
the database header statistics over this period of degradation (gstat
-h). If you see a widening gap between the Oldest Transaction and the
Oldest Active then you'll know that poor database hygiene is a significant
part of your problem.
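For example (the transaction numbers here are invented), a healthy header
looks something like this:

   Oldest transaction   41260
   Oldest active        41261
   Oldest snapshot      41261
   Next transaction     41265

whereas an Oldest transaction of, say, 2045 against an Oldest active of
41261 means close to 40,000 transactions' worth of garbage that can't be
cleaned up while that gap persists.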
>Questions:
>- How can I increase amount of memory used by fb server?
In Fb 1.5, sorts will be done in memory if memory is available. There are
settings in firebird.conf to increase the default and maximum RAM allocation
for this. Remember that, in Classic, each client gets its own allocation.
Another RAM allocation that Classic clients get individually is page
buffers. This is a block of RAM that is allocated for caching copies of
index and data pages as they are read. The server reads this cache first
when seeking a record, to reduce the amount of disk reads it needs to do to
find stuff. Check how much RAM each client is getting (multiply page size
by the figure "Page buffers" in the gstat -h readout). You can change it
in various places, including DefaultDbCachePages in firebird.conf.
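With your 8K pages the arithmetic is easy: if gstat -h were to show Page
buffers 2048, that would be 8192 x 2048 = 16 MB of cache per Classic
connection - around 480 MB for your 30 users, far too much for a 1.5 GB
machine that also has to sort in memory. (A figure of 0 in the header means
the DefaultDbCachePages setting in firebird.conf applies; if memory serves,
the stock 1.5 default for Classic is 75 pages, i.e. about 600 KB per
connection.)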
>- What parameters should I change in firebird.conf?
There may be some benefit from changing the lock table size (LockHashSlots)
to a higher prime number and the LockMemSize to a higher number than the
default 256K. There's an article about lock management by Ann Harrison in
the IBDeveloper mag, issue 2 (download from http://www.ibdeveloper.com/)
that gives useful guidance on this. (In Classic, the LockMemSize is
supposed to increase dynamically as needed; the documentation in
firebird.conf isn't altogether accurate for all models and platforms...)
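If you do experiment, it would be something along these lines in
firebird.conf (the values are examples only, not a recommendation for your
load):

   LockHashSlots = 2011      # a larger prime; the default is 101
   LockMemSize = 1048576     # 1 MB instead of the 256K default

Read Ann's article before settling on actual numbers.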
>- Are there any 'hidden' (not mentioned in comments) parameters in
>firebird.conf?
No.
>Any documentation about this file?
In the file itself; and more fully documented in the release
notes. However, specific "horses for courses" values are a question of
trial and error and reading useful articles, especially Ann's
articles. (Ann has an "expert" series of articles published in the
Research|Research Center area at www.ibphoenix.com.)
>- Superserver is said to be more efficient in such implementations, so
>why the classic is?
"Horses for courses" again. There is no hard and fast rule about which
model is best for any particular environment. Because of the threading
issues with Superserver, Classic is likely to be better for CPU-intensive
operations on SMP machines.
>- Have ibx controls negative influence on database server? I'm using
>mostly TIBSQL controls, rarely TIBQuery. Any filtering is done on the
>server side and result datasets minimized as much as possible.
The CommitRetaining issue is the killer one.
>I've asked already about firebird performance on this forum and you were
>surprised, why it's slow. I don't know if it might be bad application
>design/db design/server configuration?
Vague questions like "How can I improve performance?" don't usually get
useful responses. This question is better, since you actually provide some
information that can be commented on.
>The application isn't very complicated, nor db structure is. Hope you can
>help - the number of records will increase and number of clients double in
>a short time... If you need more configuration details, just tell me.
I don't know whether you are aware that any dataset you are holding on a
client (TIBQuery, TIBTable) in a read-write transaction (the default) will
block garbage collection. You might like to look at how you are displaying
sets to users....if you are using explicit DML for your operations anyway,
in preference to dataset methods (Insert, Edit, Delete) it would be worth
considering running your SELECTs in a read-only transaction.
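For the record, a read-only transaction in IBX would be set up something
like this (untested; trRead is a hypothetical TIBTransaction that your
select datasets would point at):

   trRead.Params.Clear;
   trRead.Params.Add('read');            { read-only }
   trRead.Params.Add('read_committed');
   trRead.Params.Add('rec_version');
   trRead.Params.Add('nowait');
   trRead.StartTransaction;

A read-only Read Committed transaction like that should not hold up garbage
collection the way an idle read-write transaction does, even if it stays
open all day.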
Make a POINT of performing explicit COMMITs and ROLLBACKs on your DML
operations and abandon AutoCommit. It was a hack invented by Borland to
make life easy for Paradox programmers moving to InterBase - it allowed
them to pretend that transaction control was irrelevant. For serious
client/server development, it's still a hack.
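In outline (names invented; dml would be a TIBSQL attached to trWrite):

   trWrite.StartTransaction;
   try
     dml.SQL.Text := 'update main_table set status = :s where id = :id';
     dml.Params.ByName('s').AsInteger := 1;
     dml.Params.ByName('id').AsInteger := CurrentID;
     dml.ExecQuery;
     trWrite.Commit;    { a hard commit - ends the transaction }
   except
     trWrite.Rollback;
     raise;
   end;

Commit, not CommitRetaining: it's the hard commit that lets the old record
versions become eligible for garbage collection.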
./heLen