Subject: Bi-directional indexes
Author: Ann W. Harrison
Dmitry Yemanov and I exchanged a couple of messages about bi-directional
indexes a few weeks ago, and as a result, I'm reasonably sure that
Firebird could support bi-directional indexes, under one condition.
And, sadly, I think supporting bi-directional index walks is necessary.
I've read too many messages from users of other systems who can't
believe how "primitive" Firebird is in not supporting what is
"obviously" superior technology.

Walking an index page backward is a nuisance because the compression is
designed to work from front to back, but I think that's just a question
of compute cycles - annoying but faster than sorting a huge table over
and over in order to find the first 50, the next 50, and so on. Search
engines are a horrible model for databases. Maybe the right answer is a
special SQL dialect that says that you don't care about the absolute
accuracy ....

Anyway, here's the condition that allows backward walking of indexes: A
bi-directional index can't be self-compacting. Currently when two
adjacent pages in an index level are nearly empty, they are combined and
one of them is released.

Here's why that condition is a problem: Normal index walking is done
left to right, with handoffs. Index bucket splits are also managed left
to right, but with handoffs of write locks. Walking the index backward
with handoffs will cause deadlocks between the backward reader and a
thread trying to split a bucket. Internal deadlocks are not allowed.

So, rather than doing a handoff, the backward walker must release the
page it holds then get a read lock on the next page. In a high
contention multi-user environment, the time between releasing one lock
and getting another is fraught with peril. In this particular case, the
risk is that the page that the backward walker is about to read could
split, putting a new page between the one it just released and the one
it's about to read. The backward walker must be able to recognize and
manage that situation.

Consider a case... index pages 123, 124, 125. I'm reading backward
and have a read lock on 125. As long as I have that lock, nobody can
change that page. My left hand pointer says I read page 124 next. OK.
But, at the same time, another transaction tries to add an entry to page
124, finds the page full, and splits it, putting half the contents into
page 345. Now the correct order of the pages is 123, 124, 345, 125.
Half the entries that had been in 124 are now on page 345 and will be
missed if the backward walker doesn't read that page.

One way to handle that is for the backward walker to check when it reads
a page that the forward pointer on that page points to the page the
backward reader just left. If not, the backward reader starts reading
forward until it finds a page that does have a forward pointer to the
page it just left. In the instance above, when the backward reader
reads page 124, it discovers that the forward pointer now points to 345,
not 125, so it reads 345 and discovers that 345 points to 125, so that's
the right page and all is well with the world and the backward reader
reads the contents of page 345 (backward).

So why does self-compaction alter all this? If, while page 124 was
splitting, page 125 was merged into a neighbor and released from the
index, the backward walker can walk forward all day and will never find
its starting point. Or, page 125 could have been released from one part
of the index, then reused somewhere else in the same index. That's all
very bad. If, however, a page once in the index will never be removed
from the index, nor change its position relative to other pages in the
index, all is well. By changing relative position, I mean that page 124
will always follow 123 and precede 125. Other pages may appear in
between, but you will never find smaller key values in 124 than in 123.

Does this make any sense? Does it seem like a good idea?