Subject | Re: [firebird-support] Re: Performance Degrade - HELP |
---|---|
Author | Rajesh Punjabi |
Post date | 2003-10-13T11:05:11Z |
*Version of Firebird :*
[root@server root]# rpm -qi FirebirdSS
Name         : FirebirdSS                        Relocations: /opt
Version      : 1.0.2.908                         Vendor: (none)
Release      : 1                                 Build Date: Fri 24 Jan 2003 05:22:22 PM IST
Install date : Wed 12 Feb 2003 05:30:53 PM IST   Build Host: suse70.fleetriver
Group        : Applications/Databases            Source RPM: FirebirdSS-1.0.2.908-1.src.rpm
Size         : 9677487                           License: InterBase Public Licence (IPL)
Signature    : (none)
URL          : http://firebird.sourceforge.net
Summary      : FirebirdSQL Database Server and Client tools.
Description  :
Firebird is a powerful, high-performance relational database designed to be embedded into applications on multiple platforms.
*Files contained inside /opt/interbase/bin directory.*
[root@server interbase]# cd bin
[root@server bin]# ls -lh
total 3.8M
-r-xr-xr-x 1 root root 3.3K Jan 24 2003 changeDBAPassword.sh
-r-xr-xr-x 1 root root 257K Jan 24 2003 gbak
-r-xr-xr-x 1 root root 267K Jan 24 2003 gdef
-r-sr-s--- 1 root root 115K Jan 24 2003 gds_lock_mgr
-r-xr-xr-x 1 root root 121K Jan 24 2003 gds_lock_print
-r-xr-xr-x 1 root root 41K Jan 24 2003 gfix
-r-xr-xr-x 1 root root 545K Jan 24 2003 gpre
-r-xr-xr-x 1 root root 31K Jan 24 2003 gsec
-r-xr-xr-x 1 root root 24K Jan 24 2003 gsplit
-r-xr-xr-x 1 root root 31K Jan 24 2003 gstat
-r-x------ 1 root root 16K Jan 24 2003 ibguard
-r-xr-xr-x 1 root root 85 Feb 12 2003 ibmgr
-r-xr-xr-x 1 root root 28K Jan 24 2003 ibmgr.bin
-r-x------ 1 root root 1.7M Jan 24 2003 ibserver
-r-xr-xr-x 1 root root 3.0K Jan 24 2003 isc4.gbak
-r-xr-xr-x 1 root root 194K Jan 24 2003 isql
-r-xr-xr-x 1 root root 351K Jan 24 2003 qli
-r-xr-xr-x 1 root root 5.1K Jan 24 2003 SSchangeRunUser.sh
-r-xr-xr-x 1 root root 4.9K Jan 24 2003 SSrestoreRootRunUser.sh
[root@server bin]#
> Hmm, if Helen's suspicions are right and you really use CS server,
> check if cache buffers isn't set inside database via gfix to large
> value suitable to SS, something about 10000. This setting overrides
> configuration parameter. And then check if it is not specified among
> client connection parameters, this overrides both conf and database
> settings.

I have posted the results of the version check above. These should
conclusively establish that I am using SS only, unless I have made a
mistake in any of the above?
I have checked the cache buffers setting from the client side; nothing
has been changed on any of the client machines. In fact, the client
machines use only gds32.dll and no parameters file is being used. Even
in my code I do not change the buffers.
Using gfix I have set the value to 10000 and will post the results over
time. I could not, however, find the switch that reports the current
value. Can anyone tell me how to read the current value using gfix?
What I used is:
[root@server bin]# ./gfix -user sysdba -password ****** -buffers 10000
/home/information/hr.gdb
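(If I have understood the tools correctly, gstat with the -h switch
prints the database header page, and that should show the Page buffers
value once it has been set; something like the command below, on the
same database as above. Please correct me if that is not the right way
to read it back.)

[root@server bin]# ./gstat -h /home/information/hr.gdb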
I accept what everyone is saying: with such a small database size I do
not need to split the structure into multiple files.
On long-running transactions ... the app uses only unidirectional
datasets (*dbxpress only*, no IBX or IBObjects), which do not allow me
to put them into transaction mode and commit / rollback later. Most of
my code (and I am still examining it based on all your suggestions)
does not use any long-running transactions. Most of my transaction code
looks like this:
sql_stmt := 'Insert into .........'; // or whatever the statement is, other than a select
with SQLConnection do
begin
  StartTransaction(Tran_ID);
  try
    ExecuteDirect(params....);  // run the statement inside the transaction
    Commit(Tran_ID);
  except
    Rollback(Tran_ID);          // undo the work if anything fails
  end;
end;
What this means is that between starting a transaction, executing a
statement and committing / rolling it back, hardly a few CPU cycles are
spent on the client machine. For selects I do not use transactions.
There are some cases where transactions take time after starting, but
those are <5% of the total transactions and of the time taken. Also,
such transactions time out in 5 minutes (the app forces the user to
redo them if he cannot finish within 5 minutes). These transactions are
used on only one machine right now, for only 4-5 hours per day at the
end of the day. Anyway, I will disable these, test, and come back to
you.
On the topic of indexes and poor selectivity I also have one more
problem, not specifically related to the performance degradation above
... let me explain my current problem there.
This is a recruitment database, which means the central structure that
comprises 80%+ of the database is the professionals and their resumes.
The list of education qualifications, for example, is a dynamic list
built at run time because new types keep getting added, as do
specializations. Similarly, cities, states and countries are added
dynamically. During app startup the app queries all these fields,
builds the lists dynamically in memory (client memory) and, when the
user wants to enter / modify a value, offers him a recommended list.
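To give an idea of the shape of these startup queries, they are
essentially of this kind (the table and column names below are only
placeholders, not my real schema):

select distinct education_qualification from candidates;
select distinct city from candidates;
select distinct state from candidates;
select distinct country from candidates;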
Most of the time every query runs on a single table that holds the
candidate details (there are some obvious masters like present
organization, designation, skills, source, etc., but these do not get
queried as much as the central candidates table). Querying blobs
(resumes) is by far the biggest job on the server.
Similarly, when a user takes a test at our end we document the results.
Results of one test for one client are sometimes used for another, so
we enter the same information repeatedly in the candidate communication
portion. When we send mails to candidates we again enter some
information that largely repeats.
With time these lists will only keep increasing ... becoming bigger and
bigger. Are there any recommendations on what can be done in the long
run here? What about the indexes in this respect? The data tends to
repeat a lot, so the indexes may not give good selectivity. Can I make
an index on an index? Every time I drop and recreate the indexes, their
selectivity goes poor again within a few days. Maybe I am wrong about
some of this? Can anyone give recommendations here? What I do at
present is sketched below.
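Just to make clear what I mean by dropping and recreating, it is
essentially the following (the index, table and column names are only
placeholders, not my real ones):

drop index idx_candidates_city;
create index idx_candidates_city on candidates (city);

Would running SET STATISTICS INDEX idx_candidates_city; instead be a
better way to refresh the selectivity, or does that not help when the
data is this repetitive?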
The tests of whether another app together with FB is causing a memory
leak are still in progress. I will inform you all about the results by
tomorrow.
Regards,
RP