Subject | Re: [IB-Architect] Next ODS change (was: System table change)
---|---
Author | Dmitry Kuzmenko
Post date | 2000-11-14T16:47:41Z

Hello, Ann!

Ann Harrison wrote:
> >pages are filled
> >only 50%, there will be only 2 million leaf pages on the third level.
>
> Why would reducing the fill level by half reduce the number of
> leaf buckets by two thirds? I would expect it to increase the
> number by about a factor of two.

No :-) I've done the same calculation for a 70% fill factor. If each index
page is filled at 70% (including the root page and pointer pages), there
will be about 2,365,000 keys instead of ~7 mln with a 100% fill factor.
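
To make the arithmetic explicit, here is a minimal sketch of that
calculation. The 4096-byte page and the 21-byte average on-page entry
(compressed key plus page pointer and length word) are assumptions for
illustration, not InterBase internals:

    # Rough B-tree capacity model: an index of depth d holds roughly
    # (entries per page) ** d keys, so capacity scales as fill ** d.
    def keys_at_depth(depth, page_size=4096, entry_size=21, fill=1.0):
        entries_per_page = int(page_size * fill // entry_size)
        return entries_per_page ** depth

    print(keys_at_depth(3, fill=1.0))  # 7414875, ~7 mln at 100% fill
    print(keys_at_depth(3, fill=0.7))  # 2515456, ~2.4 mln at 70% fill

The drop from ~7 mln to ~2.4 mln keys at the same depth is essentially
0.7 ** 3 = 0.343.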

> An average key length of 42 in a large table is unlikely unless the
> key is very odd indeed. Prefix compression works very well.

I don't know, but it was real data. The calculations were made for a real
database that had some problems with index depth = 4. I don't remember the
field size; it was about 80-120 characters, and the data was drug names
(it is a drugstore company).

> There's a reasonably simple fix for half-filled index pages (other
> than de/re/activating the index) which may be in V6 and certainly
> could be introduced easily - not an ODS change.

Not quite on the subject, but I have some interesting results: deactivating
an index and dropping it work differently. With 1.5 mln records, "alter index
xxx inactive" takes about 6 minutes, while "drop index xxx" takes no more
than a second.

> That's a very simplistic computation, ignoring, as it does, prefix and
> suffix compression and the miscellaneous other stuff associated with a

It's easy to ask people for statistics; random data is not good for such a
test. I think there are a lot of people who have tables with 500 thousand
records or more and can report index depth, average key length, field size,
and other statistics. Then you can use this information to predict index
depth for larger keys.
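
As a sketch of how such a prediction could work (the 70% fill and the
per-entry overhead below are assumed numbers; real figures would come
from the reported statistics, e.g. gstat output):

    import math

    # Hypothetical depth predictor: given a record count and an assumed
    # average on-page entry size (compressed key + overhead), estimate
    # index depth as the smallest d with (entries per page) ** d >= keys.
    def predicted_depth(num_keys, entry_size, page_size=4096, fill=0.7):
        entries_per_page = max(2, int(page_size * fill // entry_size))
        return math.ceil(math.log(num_keys) / math.log(entries_per_page))

    print(predicted_depth(1_500_000, entry_size=50))  # -> 4 with these
                                                      # assumptions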

> key like the page pointer and length word. More to the point, having
> the ability to create enormous keys is not a mandate to do so. If someone

You know users - they will do every ugly thing that software enables :-)

> wants to index a long character string - say index this whole message -
> fine. We should be clear that longer keys lead to deeper indexes and
> less performance.
>
> The fact is that multi-byte collating sequences limit our current
> keys to less than 100 characters. That's too small. We could remove

To be precise - less than 84 characters.

> My bet? Most of the long keys will turn out to be fine once
> compression has removed trailing blanks.

Hope it's true.

--
Dmitry Kuzmenko, Epsylon Technologies.