Subject | Re: [IB-Architect] Indexes, Multi-Generations, and Everything |
---|---|
Author | Charlie Caro |
Post date | 2000-06-29T15:41:30Z |

Jim Starkey wrote:
>
> When a record is inserted, the engine will make an index entry
> for each index defined on the table. When a record is modified,
> however, the engine will search the entire chain of back versions
> looking for a record with a duplicate value. If (and only if)
> it doesn't find a duplicate, it adds an index entry for the index.
>

I've read this explanation by several sources over the years and
feel compelled to correct it. Since Jim wrote the actual code, it's
no disrespect to him. I'm just refreshing his memory here.

If the engine had to traverse the record back version chain every
time it modified a record, performance would come to a crawl. The
index manager only compares the new record against the current record
version when deciding to add new index entries.
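
To make that concrete, here is a minimal C sketch of the decision as described above. It is not the actual engine source; the record structure, names, and key handling are all invented for illustration, assuming a single indexed field.

```c
/*
 * A minimal sketch of the behavior described above, assuming a toy
 * record-version struct and a toy index -- this is NOT the actual
 * InterBase source, and every name here is invented for illustration.
 * On an update, only the current record version is compared against
 * the new record image; the back-version chain is never walked.
 */
#include <stdio.h>
#include <string.h>

typedef struct record_version {
    char code[16];                    /* the indexed field            */
    struct record_version *back;      /* older version; never walked  */
} record_version;

/* Stand-in for inserting an entry into an index B-tree. */
static void index_insert(const char *key, long record_number)
{
    printf("index entry added: key=%s, record=%ld\n", key, record_number);
}

/* Called once per index when a record is modified. */
static void maintain_index_on_update(const record_version *current,
                                     const record_version *new_version,
                                     long record_number)
{
    /* One comparison against the current version only. */
    if (strcmp(current->code, new_version->code) != 0)
        index_insert(new_version->code, record_number);
}

int main(void)
{
    record_version v1 = { "ABC", NULL };  /* oldest back version   */
    record_version v2 = { "XYZ", &v1 };   /* current version       */
    record_version v3 = { "ABC", &v2 };   /* the new update image  */

    /* "ABC" may already have an entry on behalf of v1, but only v2 is
       consulted, so another "ABC" entry is added anyway.  Stale or
       redundant entries are sorted out later, not here. */
    maintain_index_on_update(&v2, &v3, 42L);
    return 0;
}
```

The redundant "ABC" entry the example leaves behind is the kind of artifact the next paragraph refers to.
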

This can lead to some pathological cases, all of which the engine
handles to present a logically correct view of the data. There's no
doubt in my mind that the current behavior is the best design choice
for performance considerations alone.

However, some of the existing garbage collection algorithms don't
leverage the fact that the only functional dependence governing new
index and blob entries is between a record and the prior record
version. I'm referring to a transaction rolling itself back or undoing
a statement savepoint. These operations are enormously expensive in the
presence of the long version chains that arise in high-contention,
multi-user environments, and they would benefit from a fresh examination.
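
Purely to illustrate the cost difference at stake (a toy contrast, not the engine's actual undo path; the names and structures are invented): a cleanup that re-scans the whole back-version chain is linear in the chain length, while a check that consults only the immediate prior version is constant time no matter how long the chain has grown.

```c
/*
 * A toy contrast, purely to illustrate the cost difference at stake --
 * not the engine's actual undo path, and the names are invented.  A
 * cleanup that re-scans the whole back-version chain is linear in the
 * chain length; a check that consults only the immediate prior version
 * (the only version the index/blob dependence above involves) is
 * constant time regardless of how long the chain has grown.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct record_version {
    char code[16];                    /* the indexed field           */
    struct record_version *back;      /* immediately prior version   */
} record_version;

/* Walks every older version looking for the same key: O(chain length). */
static bool key_found_in_chain(const record_version *undone)
{
    for (const record_version *v = undone->back; v != NULL; v = v->back)
        if (strcmp(v->code, undone->code) == 0)
            return true;
    return false;
}

/* Consults only the prior version: O(1), however long the chain is. */
static bool key_found_in_prior(const record_version *undone)
{
    return undone->back != NULL &&
           strcmp(undone->back->code, undone->code) == 0;
}

int main(void)
{
    record_version v1 = { "ABC", NULL };
    record_version v2 = { "DEF", &v1 };
    record_version v3 = { "ABC", &v2 };   /* version being undone */

    printf("full chain scan   : %d\n", key_found_in_chain(&v3));
    printf("prior version only: %d\n", key_found_in_prior(&v3));
    return 0;
}
```

The linear scan is the part that scales with chain length; whether the one-comparison check can actually replace it in the engine is the re-examination being suggested.
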

Regards,
Charlie