Subject: Re: [Firebird-Architect] Re: Indexing tales - part 1 - Siblings
Author: m. Th.
Ann W. Harrison wrote:
> m_theologos wrote:
> >>> The problem with reverse navigation in Firebird indexes
> >>> currently
> >> is not their structure, but with the necessity of avoiding
> >> internal deadlocks while permitting the recombination of nearly
> >> empty nodes.
> >>
> >
> > Thank you very much! I looked a little at the index structure and
> > didn't see any impediment, as you now confirm. But perhaps, then,
> > you could add a simple lock manager to avoid this?
> We do use the lock manager to handle locks (aka latches) on index
> pages. The problem is that a reader reading backward will cause a
> deadlock against a writer who's combining two pages. That's an
> internal deadlock and we don't do internal deadlocks.

I meant something like this:

We have:
Scan_fn = a function (or group of functions, it doesn't matter which)
that scans the tree backward.
Change_fn = a function (or group of functions) that changes the tree's
state/structure and is therefore "dangerous" for Scan_fn.

ScanLocks = integer (default value: 0) // tree-global variable holding
the number of Scan_fn instances active on the given tree at any moment
ChangeLocks = integer (default value: 0) // tree-global variable holding
the number of Change_fn instances active on the given tree at any moment

The simplest implementation:

in Scan_fn do:
    wait until ChangeLocks = 0; // wait until no Change_fn works on the tree
    increment ScanLocks;
    < our job... >
    decrement ScanLocks;

in Change_fn do:
    wait until ScanLocks = 0;
    increment ChangeLocks;
    < our job... >
    decrement ChangeLocks;

But this locks the entire tree for Scan_fn or for Change_fn whenever a
function from the other group is holding it.
IMHO, an improvement can be achieved like this:

We can define more variables:
ScanBeginKeyLocked // holds the lower limit of the scan (these are
_not_ the real scan limits but the page limits)
ScanEndKeyLocked // holds the upper limit of the scan (these are _not_
the real scan limits but the page limits; our main goal is not to touch
these pages)

ChangeBeginKeyLocked // holds the first key on the first affected page
(i.e. the first page which will be changed)
ChangeEndKeyLocked // holds the last key on the last affected page (i.e.
the last page which will be changed)

The code should change so that a function enters the wait cycle only
when it tries to enter the zone locked by the adverse function. Since we
are multi-threaded, we must keep a table of these intervals (which, in
the case of ScanBegin/EndKeyLocked, can be optimized by computing the
maximum of these intervals). Scan_fn locks the entire tree at the
beginning, but as it advances it unlocks the keys one by one. Usually
Change_fn locks very few keys, so Scan_fn can begin its scan and then
wait for Change_fn to finish its job, which will usually be very fast.
Note that a Scan_fn can place a lock after a Change_fn, but not the
inverse. I.e.:

We have the tree 1-2-3-4-6-7-8-9-10-11-12-13-14-15

We want to insert the key 5. Considering for simplicity that each page
holds only 3 keys, we have the pages 1-2-3, 4-6-7, 8-9-10, 11-12-13,
14-15. So, Change_fn (which in this case is insert_node) will set
ChangeBeginKeyLocked=4 (the first key of the lowest affected page) and
ChangeEndKeyLocked=10 (the last key of the last affected page). In the
meanwhile a reverse scan function, Scan_fn, tries to scan the tree: it
puts a lock on the entire tree (i.e. ScanBeginKeyLocked=1,
ScanEndKeyLocked=15) and begins the scan. While scanning, Scan_fn
decrements ScanEndKeyLocked, continuously testing whether the next key
to scan would enter the 'forbidden' zone (i.e.
crtScannedNode=ChangeEndKeyLocked=10). When Change_fn finishes its job,
it clears its BeginKey/EndKey pair and decrements ChangeLocks. When the
next Change_fn tries to enter, let's say inserting the key 3.14, it will
see that Scan_fn has control of that area and will wait. So, for each
group, Scan_fn and Change_fn, the tree is 'read-only'.
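The walk-through above can be sketched in code. This is an illustrative, single-threaded Python sketch under my own names and assumptions (a real implementation would guard the range table with latches and use blocking waits instead of the `on_blocked` callback used here to simulate the Change_fn completing):

```python
# Sketch of the key-range refinement (all names are invented here).
# A Change_fn registers the [begin, end] key range of the pages it will
# touch; a backward Scan_fn starts owning the whole tree and releases
# keys as it passes them, pausing only when the next key to visit falls
# inside a registered change range.

change_ranges = []  # active (ChangeBeginKeyLocked, ChangeEndKeyLocked) pairs

def change_blocks(key):
    """Would visiting `key` enter a zone locked by some Change_fn?"""
    return any(lo <= key <= hi for lo, hi in change_ranges)

def reverse_scan(keys, on_blocked=None):
    """Scan `keys` backward, shrinking our effective ScanEndKeyLocked."""
    visited = []
    for key in reversed(keys):
        while change_blocks(key):   # the 'forbidden' zone: wait it out
            if on_blocked:
                on_blocked(key)     # hook: e.g. let the Change_fn complete
        visited.append(key)         # ScanEndKeyLocked effectively drops to `key`
    return visited
```

Running the post's example: with the tree 1-2-3-4-6-7-8-9-10-11-12-13-14-15 and a Change_fn holding the range (4, 10) for the insert of key 5, the scan visits 15 down to 11 freely, pauses at 10 until the insert clears its range, then continues down to 1.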

IMHO, this should do the job. Any comments and improvements (which I'm
sure can be made) are always welcome.

> A transaction gets a deadlock error if and only if it has a record
> level conflict on an update or delete with a concurrent transaction
> updating or deleting the same record. (Except, of course, for
> no_rec_version, no_wait, read_committed - a combination I have
> already deplored.)
> >
> > Anyway, what do you think, implementing the siblings won't do the
> > navigation to go faster? You know, at 65,535 items we already have
> > a tree with 16 levels... (...but it is true that at 1,048,576 items
> > we have only 20 levels), which means in the worst case (depending on
> > implementation) 16/32 comparisons and jumps to go to the next node.
> No, I think our current indexes which generally keep to three levels
> are better. I don't see how you would handle key compression or
> balance the indexes you proposed. Have you read these?
> <>
> Regards,
> Ann

So, you have the 'siblings' feature already inside (because I presume
that you can navigate between leaf nodes).
So, 'back to square one', as the saying goes:
1. Can you optimize the 'CONTAINING' predicate using a full index scan
(given that the index has a selectivity sensibly lower than 1)?
2. Perhaps it is easy to do letter indexing (see my previous posts) to
achieve a true Free Text Search, not constrained to keywords. (And if
you are interested and have development resources, I'll show you some
improvements to this...)

Thank you very much for the links!

Pros: Very good material. Thanks indeed.
Cons: Hard to find.

