Subject: RE: Preventing a query from using all CPU time
Author: chriskudla
Hi Helen,

Many thanks for all your suggestions - and to think I thought that we
were nearing the end of the road of possibilities!

It's taken a while to try everything, but here, finally, are our
resulting confusions/questions:

ORIGINAL PROBLEM
>I have a customer who needs to be able to print an 80 page report at
>various times during the day. Understandably it takes several minutes
>to run the SQL query to get the data for this report. The problem comes
>in for the other 5 users in the system that are busy capturing data -
>everything slows down drastically for them. Even if this report is
>optimised down to less than a couple of minutes, it is still
>unacceptable for the other users.
>Is there any way of preventing this report from hogging all the
>resources, so that other smaller queries can run in between?



HB> -- is the report running in a Read Committed transaction?

Would that be good or bad? It was running in a read-only transaction
(the parameters are simply set to "read"). I added "read_committed" as
well, but then started to get locking errors when simultaneously
editing a record that was being queried.

HB> (should be in a Concurrency transaction, for integrity of the
HB> reports, as well as reducing the overhead and memory usage.)

I have read this section in your book, and in the InterBase 6 docs,
and some other web links, and now I am more confused than ever.
I *think* I know what I want, but none of the docs explain what
actual parameters the settings correspond to. (I am using IBX
controls - not sure if the syntax would be different at all?) Any idea
what the syntax is for specifying a concurrency transaction? My best
guess is below, after our current params. And will that produce
locking issues?

NB. The read/write transactions' params are set to:
read_committed
rec_version
nowait
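
For the report transaction itself, my best guess - going by the TPB
constant names, so please correct me if I've got this wrong - is to set
the IBX transaction's Params to:

read
concurrency
nowait

i.e. a read-only snapshot that doesn't wait on lock conflicts.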


HB> -- is the report's transaction a Read/Write one? Make it Read Only,
HB> so that reading the data for the report doesn't cause a big build-up
HB> of garbage from the other users.

See above - set to "READ"

Most of the transactions we start that do not modify anything were
getting rolled back instead of committed. I am busy changing them all
to commits.
The tests we run, though, are usually on freshly restored data, so
that may be an additional factor at the client's site, but not in our
tests here.
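
For what it's worth, the change itself is trivial - something like this
in each form (the component name is just illustrative):

  // end read-only work with a commit rather than a rollback
  if IBTransaction1.InTransaction then
    IBTransaction1.Commit;   // was: IBTransaction1.Rollback;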

HB> If you have a server with SMP and are *not* running Classic, you
HB> could try it out. The server should try then to balance the load:
HB> it's quite likely the report will run on the other processor if the
HB> interactive users are already flat-out when the report's transaction
HB> starts.

We are using SuperServer.
We tried Classic here on our single-processor 800MHz PIII (Win2K). The
report now uses almost 100% CPU (and takes a bit longer to run), but
the *FABULOUS* thing is that the additional users doing the data
capture type stuff don't even realize that the report is running! (We
all did a happy dance around the office - as an accounting bunch,
that's quite out of character!)

But now we're confused (AGAIN!)
SuperServer is *SUPPOSED* to be faster on "newer" hardware, and is
supposed to be an improvement on Classic. And Classic is better on
multiprocessors (new technology!). So what kind of hardware *IS*
SuperServer going to be quicker on? How do we know when to start
using it again?

2nd Issue with Classic
The services manager has not been fully implemented. I can't do a
backup from a client machine. This is a *BIG* problem. (I'm hoping
I'm doing something wrong and that it IS possible!)
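
In case it helps, this is roughly how we are trying to do the backup
from the client with the IBX service components (the server name, paths
and password below are made up):

  program BackupFromClient;
  {$APPTYPE CONSOLE}

  uses
    IB, IBServices;

  var
    Backup: TIBBackupService;
  begin
    Backup := TIBBackupService.Create(nil);
    try
      Backup.ServerName := 'ourserver';                  // made-up server name
      Backup.Protocol := TCP;
      Backup.LoginPrompt := False;
      Backup.Params.Add('user_name=SYSDBA');
      Backup.Params.Add('password=masterkey');           // made-up password
      Backup.DatabaseName := 'C:\Data\accounts.gdb';     // path as the server sees it
      Backup.BackupFile.Add('C:\Backups\accounts.gbk');  // also a server-side path
      Backup.Verbose := True;
      Backup.Attach;             // this is the bit that fails against Classic
      try
        Backup.ServiceStart;
        while not Backup.Eof do
          WriteLn(Backup.GetNextLine);                   // gbak's progress output
      finally
        Backup.Detach;
      end;
    finally
      Backup.Free;
    end;
  end.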

Thanks so much for everyone's help so far.


Chris Kudla