Subject: Re: [firebird-support] Re: Performance when deleting a lot of records
Author: Ann W. Harrison
Christian Kaufmann wrote:
>
>
> I have duplicates in some indexes. Mostly there are 2-10 records with
> the same key. Should I add a field to make the key unique in that
> case?

No. With only 2-10 records per key, the duplicate chains are short and
cheap to garbage collect. If you had 20,000-100,000 records with the
same index key value, then you'd absolutely need to fix those indexes.
>
> I noticed in the statistics, that I have 3'000'000 nodes in the index,
> but the table has only 2'100'000 records.

Yup. That's the garbage collector getting behind in its work. Try
running a "select count(*)" on that table after you do the delete: the
full scan visits every record version and garbage-collects the deleted
ones as it goes. One possibility is that everything stops for a while,
so run the experiment with care.
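A minimal sketch of that experiment (the table name and WHERE clause here are hypothetical):

```sql
-- Delete the rows, then commit so the old versions become garbage.
DELETE FROM scores WHERE event_id = 42;
COMMIT;

-- A full-table count must read every record version, so it
-- co-operatively garbage-collects the deleted ones as it scans.
-- Expect this statement to run much longer than usual.
SELECT COUNT(*) FROM scores;
```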

> Do the nodes represent the
> number of possible records in all allocated pages for that index?

There's a node in the index for each distinct indexed value held by any
record in the table. For example, if you have a record with field_1
indexed, and you store it with field_1 = 'abc', then modify it to
field_1 = 'def', you'll have two nodes in the index for that same
record. If you then modify it back to field_1 = 'abc', you'll still
have two entries for the record. When the old 'abc' value is garbage
collected, you'll still have two entries. When the old 'def' value is
garbage collected, you'll be back to one entry for the record.
>
> Index depth is 3, my page size is 4096. I read somewhere, that
> increasing the page size helps with performance. How can I change the
> page size of my database?
>
You can change the page size with a backup/restore cycle, but you don't
need to bother. A depth of three is normal and performs well.
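For the record, the backup/restore route goes through gbak; the file names, credentials, and the 8192-byte page size below are just placeholders:

```shell
# Back up the database to a portable backup file...
gbak -b -user SYSDBA -password masterkey mydb.fdb mydb.fbk

# ...then restore it into a new database file with a new page size (-p).
gbak -c -p 8192 -user SYSDBA -password masterkey mydb.fbk mydb_new.fdb
```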

Regards,

Ann