Subject | Re: [firebird-support] Preventing a query from using all CPU time
---|---
Author | Helen Borrie |
Post date | 2005-02-08T11:25:11Z |
At 10:22 AM 8/02/2005 +0000, you wrote:
>I have a customer who needs to be able to print an 80-page report at
>various times during the day. Understandably it takes several minutes
>to run the SQL query to get the data for this report. The problem
>comes in for the other 5 users in the system that are busy capturing
>data: everything slows down drastically for them. Even if this
>report is optimised down to less than a couple of minutes, it is
>still unacceptable for the other users.
>Is there any way of preventing this report from hogging all the
>resources, so that other smaller queries can run in between? (A bit
>like what Application.ProcessMessages does for you within a loop in
>Delphi)

Transactions always run concurrently, so it's not a question of "doing what
Application.ProcessMessages does in Delphi". Look for the bottlenecks in
the report process, such as:
-- is the report running in a Read Committed transaction? It should be in a
Concurrency transaction, both for the integrity of the report and to reduce
the overhead and memory usage.
-- is the report running in a transaction that is doing other things as
well? It should not. You need to isolate a big report in its own
transaction.
-- is the report's transaction a Read/Write one? Make it Read Only, so
that reading the data for the report doesn't cause a big build-up of
garbage from the other users.
-- If you are seeing CPU usage running at 100% while this report is
running, you can count on it not being the report's query that is doing it
-- unless of course it is a particularly horrendous query. It's most
likely that GC is trying to take advantage of relatively idle time to try
to do some clean-up. If you are running Classic, it could well be that the
report's query is copping everyone's clean-up.
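The first three points above can be sketched by driving the report's query through isql in its own transaction. This is only a sketch under assumptions: the database path, the SYSDBA credentials, and the SALES_DETAIL query are placeholders; the SET TRANSACTION attributes are the point.

```shell
# Run the report query in a dedicated READ ONLY SNAPSHOT transaction.
# /data/reports.fdb, the password, and the query are placeholders.
isql -user SYSDBA -password masterkey /data/reports.fdb <<'EOF'
-- READ ONLY: the report creates no garbage of its own.
-- SNAPSHOT (concurrency): the report sees one stable view of the data,
-- no matter what the interactive users commit meanwhile.
SET TRANSACTION READ ONLY ISOLATION LEVEL SNAPSHOT;

SELECT CUSTOMER_ID, ORDER_DATE, AMOUNT
FROM SALES_DETAIL
ORDER BY ORDER_DATE;

-- Commit as soon as the data is fetched, so the snapshot doesn't pin
-- old record versions any longer than necessary.
COMMIT;
EOF
```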
As a reality check, some time when there are not interactive users around,
you could try the following:
1. Run a sweep.
2. Run the report and monitor the CPU statistics.
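From the command line, that reality check might look like the following (the database path and SYSDBA credentials are assumptions; adjust for your installation):

```shell
# 1. Run a manual sweep while the database is quiet.
#    /data/reports.fdb and the password are placeholders.
gfix -sweep -user SYSDBA -password masterkey /data/reports.fdb

# 2. Start the report, and watch CPU usage from another terminal, e.g.:
vmstat 5
```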
-- do the db header (gstat -h) stats show a large gap between Oldest
Transaction and the Oldest Active transaction? Does it often approach the
sweep interval figure? It could be that you are seeing sweeps happening
because of these report runs going on with unfavourable transaction
settings. If this is the case, it might be worthwhile trying out disabling
auto-sweeps and scheduling manual sweeps at the lunch-breaks.
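Both checks can be sketched with the standard command-line tools (the database path and credentials are again placeholders):

```shell
# Show the transaction counters from the database header page; a large,
# persistent gap between "Oldest transaction" and "Oldest active" is
# the warning sign discussed above.
gstat -h /data/reports.fdb | grep -i oldest

# Disable the automatic sweep by setting the sweep interval to zero...
gfix -housekeeping 0 -user SYSDBA -password masterkey /data/reports.fdb

# ...and schedule a manual sweep at a quiet time instead, e.g. in cron:
#   30 12 * * 1-5  gfix -sweep -user SYSDBA -password masterkey /data/reports.fdb
```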
A frequently occurring large gap on such a small system could be the sign
of application code that is not managing transactions well. Some strategic
refactoring of the interactive applications could help a lot.
If you have a server with SMP and are *not* running Classic, you could
try it out. The server should then try to balance the load: it's quite
likely the report will run on the other processor if the interactive users
are already flat-out when the report's transaction starts.
./hb