Subject: Re: [IB-Architect] 'Order By' Issue
Author: Jim Starkey
Post date: 2000-12-05T20:28:35Z
At 11:08 PM 12/5/00 +0300, Dmitry Yemanov wrote:
>[snip]
>A few fresh thoughts.
>
>Anyway, what should the engine do to sort records now? Read them from their
>data pages (sequentially or via indices), write them to the buffer, sort
>the buffer, read it and finally transfer to the client side.

It is generally (i.e. always) faster to make a sequential pass
through selected records and sort the results than to bounce
between the index and the data pages -- a quicksort, even with
a merge, is faster than a page read. The only time that ordering
by the index wins is a) the metric is time to first record, or
b) you aren't planning to look at any but the first few records.
There is no question that less memory could be used by applying even
a crude compress/decompress when building the sort record. But
given the vast amounts of memory available on modern machines, I
don't think it's going to make any difference at all.
The allocation of space in the sort temp space is dumb -- always
has been. The question is trading off space efficiency vs.
rattling the disk arm. With disk space going for a couple
of bucks per gig, who really cares?
For all things related to performance, remember that the goal is to run
out of memory, saturate the CPU, and max out the disk all at the
same time. It's getting real hard to tie up a CPU these days,
and Firebird was designed to run happily with a couple of dozen
buffers. So the limiting factor is almost always the disk. Any
memory or CPU tricks that can avoid stopping the electronics to
wait for a mechanical kludge to bounce around are worth taking.
Anything that just causes the CPU to spend more time in the idle
loop waiting for a disk interrupt is just slowing things down.
If you really want to make things scream, figure out how to do
things in parallel. Which means fine granularity threading,
which means ... you know the drill.
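The divide-and-merge shape of a parallel sort can be sketched in a few lines: sort independent chunks on worker threads, then merge the sorted runs. This is a sketch of the idea only (names and chunk sizing are mine, not Firebird's design), and note that in CPython the GIL limits true CPU parallelism for pure-Python work; a real engine would do this with native fine-grained threads.

```python
from concurrent.futures import ThreadPoolExecutor
from heapq import merge

def parallel_sort(values, workers=4):
    """Sort chunks concurrently, then merge the sorted runs."""
    step = max(1, len(values) // workers)
    chunks = [values[i:i + step] for i in range(0, len(values), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        sorted_chunks = list(pool.map(sorted, chunks))
    # heapq.merge lazily merges already-sorted runs
    return list(merge(*sorted_chunks))

print(parallel_sort([5, 3, 8, 1, 9, 2, 7, 4]))  # -> [1, 2, 3, 4, 5, 7, 8, 9]
```

The merge step is the same one an external sort uses on disk runs, which is why the sort-with-merge path parallelizes naturally once the chunk sorts are independent.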
Objects are closer than they appear.
Jim Starkey