Subject RE: [IB-Architect] Fw: Slow deletion on large tables
Author Claudio Valderrama C.
> -----Original Message-----
> From: Charlie Caro [mailto:ccaro@...]
> Sent: Martes 30 de Mayo de 2000 15:18
> I hope someone has had an opportunity to try this scenario [slow
> deletion] with
> a V6 Superserver configuration. If the performance is still this
> bad then I
> haven't done a very good job in this area.

Since the original posting read:
"I'm using embedded SQL in C++ on a NT machine"
then I *assume* that person is using V6 SuperServer.

> Since the post appeared in a Kinobi newsgroup, I'm assuming the
> slow deletion
> was run against V6? My question: was it Classic or Superserver?

Apart from what I've said above, I'm only reading about the problems. Brett
has tried to jump in there. Can you take a peek at kinobi.performance,
Charlie?

The issue is a bit daunting. Maybe you'll have an idea, even with the
current limitations, after reading the further explanations.

> It's a consequence of the architecture which was designed before the
> implementation of referential integrity. The initial architecture
> certainly
> didn't foresee the magnitude of the length of duplicate index
> chains today, and
> even if it did, it probably assumed user error or bad application design.

Umh, from what I read in this list more than a month ago, I had assumed
these problems weren't real. AFAICT, Jim said there's no evidence of
performance problems in connection with indexes holding a great number of
duplicate entries. I may have misread the explanation, tho.

> Assuming that all users can't be educated or for some reason
> avoid those large
> numbers of duplicates, there has been some research and thinking
> on the issue.

Well, if you want to start, please educate me. :-) Let's assume I have 40
thousand people and I keep the region as a foreign key. (A region in my
country is like a state in the US.) There are only 13 regions here, so the
selectivity is rather poor, but the index will be created as part of the FK
relationship. What's the solution? Don't define the FK, but perform the
lookup in a BEFORE INSERT trigger with a select/exists construction?
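To make the question concrete, here is roughly what that trigger workaround could look like in InterBase dialect. Table, column and exception names are invented for illustration, and note the trade-off: you lose the checks a declared FK gives you on DELETE/UPDATE of the regions table, so this only covers the insert side.

```sql
CREATE EXCEPTION bad_region 'Unknown region code';

SET TERM !! ;
CREATE TRIGGER people_bi FOR people
BEFORE INSERT AS
BEGIN
  /* Emulate the FK lookup without creating the low-selectivity
     index on people.region that a declared FK would build. */
  IF (NOT EXISTS (SELECT * FROM regions r
                  WHERE r.region_id = NEW.region)) THEN
    EXCEPTION bad_region;
END !!
SET TERM ; !!
```

A matching trigger (or manual discipline) would be needed on regions to keep deletes there honest.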

> Primarily, it involves treating the 'record number' field of an
> index node as a
> minor key pseudo-extension of the real index key.

This record number... is it exclusive to the index node, or is it taken
from the table's records?

> Thus duplicate chains would be collated in the index tree by
> record number. Once
> extended to non-leaf levels of the index, it becomes trivial to
> random access
> and purge a duplicate index node by record number.
> Regards,
> Charlie
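As an aside, the scheme Charlie describes can be sketched in a few lines. This is a toy model, not InterBase code: if each index entry is ordered by (key, record number), the record number acts as a minor key, duplicates stay sorted, and one specific entry can be found by binary search instead of walking the whole duplicate chain.

```python
import bisect

# Toy model of a single index level whose entries are sorted by
# (key, record_number): duplicates are kept in record-number order,
# so one specific entry is located by binary search rather than by
# scanning the duplicate chain.
index = []

def insert_entry(key, recno):
    bisect.insort(index, (key, recno))

def delete_entry(key, recno):
    """Remove one duplicate in O(log n) rather than O(chain length)."""
    i = bisect.bisect_left(index, (key, recno))
    if i < len(index) and index[i] == (key, recno):
        del index[i]
        return True
    return False

# 13 distinct key values spread over 40,000 rows, like the region FK:
for rec in range(40_000):
    insert_entry(rec % 13, rec)

print(delete_entry(5, 18))   # True: found directly by (key, recno)
print(delete_entry(5, 18))   # False: already removed
```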

Does this mean that the index tree is more balanced with this addition, or
does its depth change?
One of the users with this problem said he would try Ann's suggestion. The
other said this problem doesn't apply to his case:
Thank you for the replies,

I've only the one primary index on the table I'm testing, so there are no
duplicate entries in the index.

I tried replacing the Blob with a CHAR(900) and this didn't help. The
deletion takes about 3 times as long as the insertion of the records.
Although this time was constant, whereas with the Blobs the time increases
with repeated test runs.

I also tried removing the only index in the table. The first time the
records were deleted quickly; the second time the delete took a very long
time. I'm now even more confused as to what is going on!

I won't continue piggybacking the kinobi.performance NG here unless ISC
people want to continue here in parallel. The problem is that Brett seems
to have run out of swap space <grin> because he has limited time to answer
these issues for now. As I don't have decent suggestions and there are
other pending (unanswered) questions in the same NG, I'll read and fade
into silence while praying for the next generations.