Subject | Re: [firebird-support] Slow response on a big database? |
---|---|
Author | Indysoft Bt. |
Post date | 2009-02-16T16:42:27Z |
Hi!
In my experience, the most likely cause is implicit transaction handling
in the data access layer. COMMIT RETAINING combined with long-running
transactions kills the server, because the transaction context never
really ends and garbage collection cannot advance. I use explicit
transaction handling and everything is fine: even with many
clients/connections, the server runs very fast on larger databases.
Granted, these are not huge record counts, but I have a table of GeoIP
information (IP address ranges mapped to their geographical location,
with longitude/latitude coordinates plus ISP, country, and city data for
the whole world). It contains 4,500,000 records and performs very well
with many clients connected. I also ran a load test against a database
with a table of 50,000,000 records containing INTEGER, VARCHAR, DATE,
etc. fields; it too stays fast under that amount of data and that many
client requests. Explicit transaction handling is the key, in my opinion.
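The pattern I mean can be sketched roughly like this. This is only an illustration using Python's built-in sqlite3 module as a stand-in for a Firebird driver (a real Firebird client library exposes a similar connect/cursor/commit shape); the `geoip` table and `lookup_geoip` function are made-up names for the example:

```python
# Sketch of explicit transaction handling: each unit of work runs in
# its own short transaction and ends with a hard commit or rollback,
# instead of an implicit, long-lived transaction kept open by the
# data access layer. sqlite3 stands in for a Firebird driver here;
# table/column names are hypothetical.
import sqlite3

def lookup_geoip(con, ip_num):
    """Run one short read in its own transaction and end it promptly."""
    cur = con.cursor()
    try:
        cur.execute(
            "SELECT country, city FROM geoip "
            "WHERE ip_from <= ? AND ip_to >= ?",
            (ip_num, ip_num),
        )
        row = cur.fetchone()
        con.commit()    # hard commit: the transaction really ends here
        return row
    except Exception:
        con.rollback()  # never leave a transaction dangling
        raise

# Minimal demo data.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE geoip "
    "(ip_from INTEGER, ip_to INTEGER, country TEXT, city TEXT)"
)
con.execute("INSERT INTO geoip VALUES (100, 200, 'HU', 'Budapest')")
con.commit()

print(lookup_geoip(con, 150))  # -> ('HU', 'Budapest')
```

The point is simply that every transaction is opened, used briefly, and closed for real, so the server can clean up behind it.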
Regards, Alex :-)