Subject: Re: [firebird-support] Database statistics
Author: Ann W. Harrison
Post date: 2005-06-17T18:13:37Z
maxirobaina wrote:
> Hi,
>
> Can anybody help me and explain how to read these index statistics?
>
> Index RDB$FOREIGN125 (1)

The name of the index is RDB$FOREIGN125. From the name, I infer that
the index was created automatically as part of a foreign key
declaration in which this table is the child (aka referencing table).
This is the second index defined for the table - the number in
parentheses is the index id, and the ids start at 0.
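For concreteness, here's a sketch of the kind of declaration that
produces such an index, plus a query against the system table
RDB$INDICES to list a table's indexes with their ids. The table and
column names are invented for illustration (and assume a parent table
ORDERS already exists); only the system table and its columns are real:

    /* Hypothetical schema: an unnamed FOREIGN KEY constraint makes
       Firebird create an enforcing index named RDB$FOREIGNnnn. */
    CREATE TABLE ORDER_LINE (
        LINE_ID   INTEGER NOT NULL PRIMARY KEY,
        ORDER_ID  INTEGER NOT NULL,
        FOREIGN KEY (ORDER_ID) REFERENCES ORDERS (ORDER_ID)
    );

    /* List the table's indexes with their ids; here the primary
       key index would get id 0 and the FK index id 1. */
    SELECT RDB$INDEX_NAME, RDB$INDEX_ID
      FROM RDB$INDICES
     WHERE RDB$RELATION_NAME = 'ORDER_LINE'
     ORDER BY RDB$INDEX_ID;

An id of 1, as in the post above, fits a table whose primary key index
already took id 0.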
> Depth: 3, leaf buckets: 1799, nodes: 1185722

The index has three levels, which is good - a depth of more than three
suggests that increasing the database page size may improve performance.
The width of the index at its bottom level is 1799 pages. There are
1185722 entries in the index. Sometimes you'll see more index entries
than there are records in the table. That's an effect of
multi-generational indexes - if an indexed field is updated, a single
record will have two (or more) index entries.
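As a rough sanity check on those numbers (my arithmetic, not part of
the gstat output), divide nodes by leaf buckets to get the average
number of entries per leaf page; RDB$DATABASE is just Firebird's
one-row system table, handy for ad hoc calculations:

    /* 1185722 entries / 1799 leaf pages = ~659 entries per page,
       plausible only because the entries compress so well. */
    SELECT 1185722.0 / 1799 FROM RDB$DATABASE;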
> Average data length: 0.00, total dup: 1184098, max dup: 82105

Average data length of 0.00 is good and bad. The data length is the
amount of space taken in the index to represent index key values. A
zero average length means that prefix and suffix compression are being
used heavily, so the index is very dense. However, prefix compression is
most successful when the index contains lots of duplicates. In this
case, there are only 1624 distinct values among the 1185722 entries
(nodes minus total dup: 1185722 - 1184098 = 1624). So, on average,
there are 730 instances of each value. That's not great. What's worse
is that the distribution is uneven, as shown by the max dup: 82105.
The most heavily duplicated value has eighty-two thousand, one hundred
and five instances.
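To see that skew directly in the data, a query along these lines works;
CHILD_TABLE and FK_COLUMN are placeholders for the real table and
foreign key column:

    /* Rows per foreign key value, most duplicated value first.
       The top count should come back as roughly 82105. */
    SELECT FK_COLUMN, COUNT(*) AS DUPS
      FROM CHILD_TABLE
     GROUP BY FK_COLUMN
     ORDER BY 2 DESC;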
My rule of thumb is that more than ten thousand duplicates on any single
value will cause garbage collection problems. Firebird 2 has a new index
structure that eliminates the problem. Were it me, I'd eliminate the
constraint for now.
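Dropping the constraint also drops its enforcing index. Something like
the following would do it - CHILD_TABLE and INTEG_125 are placeholders;
the real constraint name can be read from RDB$RELATION_CONSTRAINTS:

    /* Find the constraint that owns index RDB$FOREIGN125 ... */
    SELECT RDB$CONSTRAINT_NAME
      FROM RDB$RELATION_CONSTRAINTS
     WHERE RDB$INDEX_NAME = 'RDB$FOREIGN125';

    /* ... then drop it, which removes the index as well. */
    ALTER TABLE CHILD_TABLE DROP CONSTRAINT INTEG_125;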
> Fill distribution:

None of the index pages is less than 20% full.

>      0 - 19% = 0
>     20 - 39% = 1

One index page is less than 40% full, probably the top of the index.

>     40 - 59% = 75

Seventy-five pages are between 40 and 59% full - that's not a problem
and may suggest that the actual key is fairly large - upper levels are
less susceptible to prefix compression, and a large key reduces the
packing density of index entries... so those 75 pages might be level 2
of the index. Or they may be pages that split recently.

>     60 - 79% = 5

Five pages are between 60 and 79% full - here I would suspect splits in
the leaf level rather than lack of compression.

>     80 - 99% = 1718

The vast majority of pages - 1718 out of 1799 - are over 79% full.
That's pretty good. But it's good because prefix compression is working
well, and prefix compression is working because of the large number of
duplicates, which is bad.
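One follow-up worth mentioning: after the data distribution changes
this much, or after the constraint is dropped and later re-created,
it's worth refreshing the selectivity figure the optimizer keeps for
the index. That's a one-line statement:

    /* Recompute the stored selectivity for this index after
       mass inserts, updates, or deletes. */
    SET STATISTICS INDEX RDB$FOREIGN125;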
Regards,
Ann