Subject: Re: Delete and performance afterwards
Ann, Svein,

Thanks for the very detailed answers!

> You're seeing the effect of garbage collection. Try gbak with the
> -g switch (to inhibit garbage collection). It will be the same
> speed in both cases.

That was (of course) exactly the problem.
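For the archives, the invocation I ended up testing with looks roughly like this (a sketch assuming gbak from a standard InterBase/Firebird installation; the database and backup file names are placeholders):

```
gbak -b -g mydb.gdb mydb.gbk
```

Here -b requests a backup and -g inhibits garbage collection during the run, so the backup speed no longer depends on how many deleted record versions are waiting to be cleaned up.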

> <minor rant>
> Why do people test databases by storing and deleting large
> amounts of artificial data? It seems so unlikely in any
> real situation
> </minor rant>

My test database came from a "real" situation; I only reduced it to this test case to be sure I had isolated the problem. The real situation is this: I'm doing a data transfer that uses a generic import interface. This import interface produces a protocol based on standard messages (perhaps 20 different ones). During the initial data transfer there are a lot of "trivial" messages like "new record inserted" etc., which I simply delete afterwards. A better approach would of course be to improve my program so that I can control the generation of the protocol more precisely.

> In a real application, if you need an index on the code field
> (values 0:3) define a compound index of code, test_id. The
> extra term makes the index unique, greatly improving garbage
> collect performance.

This is a very interesting point! I often have this kind of index, with many duplicates, caused by the indexes on foreign key columns. For example, lookup tables for code fields: there I have perhaps 10 code values used across some thousands of records. Something like

CREATE TABLE Data (
  DataID  NUMERIC(18,0),
  CodeID  NUMERIC(18,0),
  Field1  ...,
  PRIMARY KEY (DataID),
  FOREIGN KEY (CodeID) REFERENCES Code (CodeID));

Is it somehow possible to define such a compound index for foreign-key fields? Or do you know another design solution for this kind of problem?
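In case an explicitly created compound index is an option, I imagine it would look something like the sketch below (the index name is made up, and I'm assuming the engine can make use of a user-defined index alongside the single-column index that the foreign-key constraint creates automatically):

```
/* Hypothetical compound index: appending the unique DataID makes
   every index entry distinct, avoiding the long duplicate chains
   that slow down garbage collection after mass deletes. */
CREATE INDEX IDX_DATA_CODEID_DATAID ON Data (CodeID, DataID);
```

Whether the duplicate-laden index created by the constraint itself can be replaced this way is exactly what I'm unsure about.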

Daniel Achermann