Subject | Re: i need your opinion...
---|---
Author | Adam
Post date | 2005-10-29T00:19:46Z
--- In firebird-support@yahoogroups.com, Alexandre Benson Smith
<iblist@t...> wrote:
>Ali,
> Ali Gökçen wrote:
>
> >You know, in a multiuser environment there are a lot of kinds of work.
> >Some of them are more important and need fast response, fast inserts,
> >fast updates (urgent tasks).
> >If there are a lot of users running heavy but non-time-critical
> >queries, they can create a disk I/O bottleneck on the system.
> >My goal is to detain these users with CPU operations and to defer
> >their disk I/O operations.
> >
> >Regards.
> >Ali
This is a much better description of what you are trying to achieve.
Perhaps you could clarify whether the "heavy but non-time-critical
queries" are inserts / updates / deletes etc, or whether it is just
someone who is running reports on the data?
If it is simply some reports that are causing the bottleneck (which I
suspect it is), then the easiest way to do this is to run gbak and
backup / restore the database to a different server. From my
experience, many reports are not time critical, and the previous
night's backup is "up to date enough" to query what they are after.
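For example (a rough sketch only; the user, password, paths and file
names below are just placeholders for your own setup), a nightly cycle
could look something like:

    gbak -b -user SYSDBA -password masterkey /data/prod.fdb /backups/prod.fbk
    gbak -c -user SYSDBA -password masterkey /backups/prod.fbk /data/reports.fdb

The first command backs up the production database; the second restores
the backup file as a separate reporting database (run it on the
reporting server after copying the .fbk across). The reporting users
then query the restored copy and never touch the production server's
disks.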
To answer your direct question, I suppose you could write a UDF that
sleeps the thread for a set time, rather than wasting clock cycles that
may be needed for sorting etc.
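Just as a sketch (the function, entry point and module names below are
invented, and you would have to write and compile the small C function
yourself), the declaration side would look roughly like:

    DECLARE EXTERNAL FUNCTION sleep_ms
        INTEGER
        RETURNS INTEGER BY VALUE
        ENTRY_POINT 'sleep_ms' MODULE_NAME 'slowdown_udf';

    /* the C entry point is just a wrapper around Sleep() / usleep();
       remember UDF arguments arrive by reference: int sleep_ms(int *ms) */

Something like SELECT SLEEP_MS(5000) FROM RDB$DATABASE would then stall
that statement for five seconds.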
BUT
As Set hinted at, this approach is quite dangerous. If the "heavy but
non-time-critical queries" are inserts / updates / deletes, then you
are going to have a delta on some records for longer than necessary.
That means other processes that may be time-critical will have to wait
for your delayed "non-time-critical queries" to commit or roll back,
and this delay will be much worse than the I/O bottleneck you are
trying to avoid.
If, however, the "heavy but non-time-critical queries" are purely
selects, then you still have an issue, because the transaction will
keep old record versions "interesting". This will prevent garbage
collection, keep old record versions inside any indexes and, depending
on how old the transaction is, give the garbage collector a real run
for its money when it finally does commit.
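If you want to see whether a long-running transaction really is holding
garbage collection back, the database header gives a quick hint (output
trimmed, and the numbers below are obviously made up):

    gstat -h /data/prod.fdb

    Oldest transaction    120045
    Oldest active         120046
    Oldest snapshot       120046
    Next transaction      184302

A large and growing gap between "Oldest transaction" / "Oldest active"
and "Next transaction" usually means somebody's transaction has been
sitting open, keeping record versions interesting.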
Using Classic Server (CS), you could set the process priority; however,
I think this is mainly a CPU thing rather than an I/O thing, and these
lengthy queries may well benefit from the SuperServer (SS) shared cache.
On the other hand, perhaps what you are trying to do is to "encourage"
users to run these reports overnight by making them take extra long.
If this is the case (not that it is an approach I would take), insert
the delay on the client end and commit the transaction as soon as
possible, so that the garbage collection and delta issues do not arise.
Adam