Subject: Re: [IB-Architect] Fw: Slow deletion on large tables
Author: Charlie Caro
I hope someone has had an opportunity to try this scenario [slow deletion] with
a V6 Superserver configuration. If the performance is still this bad, then I
haven't done a very good job in this area.

Since the post appeared in a Kinobi newsgroup, I'm assuming the slow deletion
was run against V6. My question: was it Classic or Superserver?

It's a consequence of the architecture, which was designed before the
implementation of referential integrity. Referential integrity constraints
automatically create indexes on foreign key columns, and those columns often
hold only a handful of distinct values, so duplicate chains grow very long.
The initial architecture certainly didn't foresee the length of today's
duplicate index chains, and even if it had, it probably would have assumed
user error or bad application design.

Assuming that not all users can be educated, or cannot for some reason avoid
those large numbers of duplicates, there has been some research and thinking
on the issue. Primarily, it involves treating the 'record number' field of an
index node as a minor-key pseudo-extension of the real index key.
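
To make that concrete, here is a minimal sketch in Python (not engine code;
the node layout and field names are invented for illustration) of the
collation change: once the record number is folded in as a low-order key
segment, duplicates stop comparing as equal.

from collections import namedtuple

# Hypothetical leaf-node shape; the real on-disk format differs.
Node = namedtuple("Node", ["key", "recno"])

def collate_old(node):
    # Current scheme: only the user key participates in the ordering,
    # so all duplicates collate to the same position and the chain is
    # effectively an unordered list.
    return node.key

def collate_new(node):
    # Proposed scheme: the record number acts as a minor key, giving
    # every node in a duplicate chain a unique, predictable position.
    return (node.key, node.recno)

dups = [Node("SHIPPED", r) for r in (901, 17, 5500, 230)]
dups.sort(key=collate_new)   # chain is now ordered by record number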

Thus duplicate chains would be collated in the index tree by record number.
Once the record number is carried up into the non-leaf levels of the index,
it becomes trivial to directly locate and purge a duplicate index node by
record number.
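
Continuing the sketch above (again illustrative, not engine code): with the
chain ordered by record number, deleting one entry becomes a binary search
instead of a walk of the whole chain. The bisect lookup below stands in for
the B-tree descent keyed on (key, record number).

import bisect

def purge(chain, key, recno):
    # 'chain' is a sorted list of (key, recno) pairs standing in for a
    # duplicate chain; the descent through non-leaf pages is modeled by
    # a plain binary search on the composite key.
    target = (key, recno)
    i = bisect.bisect_left(chain, target)
    if i < len(chain) and chain[i] == target:
        del chain[i]          # O(log n) locate instead of an O(n) scan
        return True
    return False

chain = [("SHIPPED", r) for r in range(20000)]
purge(chain, "SHIPPED", 12345)   # finds the one node without scanning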

Regards,
Charlie

"Claudio Valderrama C." wrote:
>
> Hello, I need comments on this, assuming it's worth the time, of course.
> Is this scenario a natural consequence of the architecture, a bug in the
> engine, or misuse by the developers?
> It appeared in kinobi.performance and I was curious about these results.
>
> C.
>
> Michael Stratmann <Stratmann@...> wrote in message
> <392EC554.7EAEDDBC@...>...
> > Hello!
> >
> > Jon wrote:
> > >
> > > Hello,
> > >
> > > I'm having performance problems deleting records from a large table.
> > > Each record is small (6 integers) and contains a blob consisting of a
> > > single 1K segment.
> > >
> > > In my prototype I add 20,000 records and then delete them again. The
> > > first time through, the commit for the delete finishes in a couple of
> > > seconds. The second time through, the commit takes up to a minute.
> >
> > We (Tilo Arnold and I) have tried it with 200,000 records and 10 KB
> > BLOBs, with the usual performance hints (queries instead of TTable,
> > ascending and descending indices, SET STATISTICS) already implemented. :-/
> >
> > Delays run up to 30 minutes, and we would like to store millions of
> > records. Now we are discussing alternatives to BLOBs. Personally, I
> > believe InterBase BLOBs are not a really good idea if you intend to
> > delete them later.
> >
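
For reference, the scenario Jon describes could be reproduced along these
lines. This is only a sketch using the kinterbasdb DB-API driver; the DSN,
credentials, and table definition (a table T with an integer ID and a blob
PAYLOAD, assumed to already exist) are placeholders.

import time
import kinterbasdb

con = kinterbasdb.connect(dsn="server:/data/test.gdb",
                          user="SYSDBA", password="masterkey")
cur = con.cursor()

for pass_no in (1, 2):
    # Insert 20,000 small rows, each carrying a ~1K blob segment.
    for i in range(20000):
        cur.execute("INSERT INTO T (ID, PAYLOAD) VALUES (?, ?)",
                    (i, "x" * 1024))
    con.commit()

    # Delete them all again and time the commit; the second pass is
    # where the long delay shows up.
    cur.execute("DELETE FROM T")
    start = time.time()
    con.commit()
    print("pass %d: delete commit took %.1fs"
          % (pass_no, time.time() - start))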