>
> There are other reasons that the index must be considered noisy. One is that
> there is never a guarantee that uncommitted records are completely backed out.
> A record is not committed by writing to disk, per se, but by first writing the
> record then changing the state of the transaction on the TIP (transaction
> inventory page) from active (or limbo) to committed. Another reason is that
> all numeric quantities, including dates, are mapped to double precision
> floating point numbers, then are mutilated so they can be compared as
> variable length sequences of unsigned bytes. There are most definitely
> boundary conditions where several discrete values can map to the same index
> key value. The engine compensates for these by always including endpoints
> when doing index range retrievals then, of course, applying the exhaustive
> boolean to whatever dribbles out of the record stream.
>
> If it seems intuitive, you don't understand it, and need to think about it
> some more.
>
> Jim Starkey
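
To make Jim's second point concrete for anyone following along, here is a
minimal sketch in plain C of how a double can be turned into a key that
compares correctly as unsigned bytes, and how two distinct values can collapse
onto the same key. The sign-flip encoding and every name below are my own
illustration, not Firebird's actual key format or code.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch only: encode a double as an 8-byte key that sorts in numeric
 * order when compared as unsigned bytes (memcmp).  This is the classic
 * IEEE-754 sign-flip trick, not Firebird's real key format. */
static void make_key(double value, uint8_t key[8])
{
    uint64_t bits;
    memcpy(&bits, &value, sizeof bits);
    if (bits & 0x8000000000000000ULL)
        bits = ~bits;                     /* negative: invert all bits      */
    else
        bits |= 0x8000000000000000ULL;    /* non-negative: flip sign bit    */
    for (int i = 0; i < 8; i++)           /* big-endian layout so memcmp    */
        key[i] = (uint8_t)(bits >> (56 - 8 * i));  /* follows numeric order */
}

int main(void)
{
    /* Two distinct 64-bit integers that collapse onto one double, and
     * hence onto one index key: a boundary condition of the mapping. */
    long long a = 9007199254740992LL;     /* 2^53     */
    long long b = 9007199254740993LL;     /* 2^53 + 1 */
    uint8_t ka[8], kb[8];

    make_key((double)a, ka);
    make_key((double)b, kb);
    printf("same key for %lld and %lld: %s\n",
           a, b, memcmp(ka, kb, sizeof ka) == 0 ? "yes" : "no");

    /* The compensation Jim describes: the range scan keeps both endpoints,
     * so a lookup for b fetches every record whose key matches, and each
     * candidate is re-checked against the exact predicate (the
     * "exhaustive boolean") before it is accepted. */
    long long candidates[] = { a, b };    /* both share the search key     */
    for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++)
        if (candidates[i] == b)           /* exact re-check on the record  */
            printf("accepted: %lld\n", candidates[i]);

    return 0;
}

The index alone cannot tell those two rows apart, which is why the final check
against the real record values is not optional.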
An index in a multi-generational database seems like a "Pandora's box"...
I don't know whether optimizing for one particular situation (are there other
tests under different CPUs and OSes?) would break the "worst case scenario",
giving a boost in some situations and killing performance in others. I think:
"If it works well, don't touch it."
Probably, with all due respect to "Arno the optimizer", there are other ways
to improve overall performance (being able to use more than 10,000 pages
without a performance penalty, for example).