Subject | Re: Index is corrupt
---|---
Author | svanderclock |
Post date | 2010-03-02T10:13:05Z |
Dear Ann,
> OK, having taken a quick look at the table definition and gstat reports
> below, I suggest you drop all the indexes where the max dup is more than
> 1/3 of the total. The purpose of an index is to avoid reading every
> page in the table. If most keys in the index are present on every page,
> then the index doesn't do any good.

Yes, but let me explain better the purpose of such an index:
For example, the index is on the number of rooms in a house.
Most houses have 1, 2, 3... 5 rooms;
very, very few houses have 20 rooms.
Now, what if someone wants to see all the properties with more than 20 rooms?
The table has millions of records! Doing a full table scan is not acceptable. Also, what if someone simply wants to order the results by the number of rooms in descending order? (A SQL sketch follows below.)
I was thinking that in FB 2 duplicates in an index are no longer a problem (especially because internally FB adds a random offset to the value to make it unique internally)... is that right?
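To make that use case concrete, here is a minimal SQL sketch; the table and column names (PROPERTY, ROOMS) are invented for illustration and are not from the real schema:

```sql
-- Illustrative names only; PROPERTY and ROOMS are assumptions, not the real schema.
-- An ascending index serves the range filter; a descending one lets
-- ORDER BY ... DESC be answered from the index instead of sorting millions of rows.
CREATE INDEX IDX_PROPERTY_ROOMS ON PROPERTY (ROOMS);
CREATE DESCENDING INDEX IDX_PROPERTY_ROOMS_DESC ON PROPERTY (ROOMS);

-- Only the few keys above 20 are visited in the index, not every data page.
SELECT * FROM PROPERTY WHERE ROOMS > 20;

-- The descending index can back this ordering.
SELECT * FROM PROPERTY ORDER BY ROOMS DESC;
```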
> Another approach is to eliminate all the compound indexes and put
> ascending and descending indexes on individual fields.

That is possible; I can concatenate 2 or more columns into one simple Varchar field. In the end it will not reduce the number of indexes, but it will increase the number of columns in the table. Do you really think it will help?
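To show both variants side by side, here is a hedged sketch; the names (PROPERTY, COUNTRY, CITY) are invented for illustration and are not taken from the actual table:

```sql
-- Invented names (PROPERTY, COUNTRY, CITY); a sketch only.

-- Today: one compound index.
CREATE INDEX IDX_PROP_COUNTRY_CITY ON PROPERTY (COUNTRY, CITY);

-- Ann's suggestion: individual ascending/descending indexes, which the
-- optimizer can combine for multi-column filters.
CREATE INDEX IDX_PROP_COUNTRY ON PROPERTY (COUNTRY);
CREATE INDEX IDX_PROP_CITY ON PROPERTY (CITY);
CREATE DESCENDING INDEX IDX_PROP_CITY_DESC ON PROPERTY (CITY);

-- The concatenation idea from the reply, written as a Firebird 2.x
-- expression index over the combined value.
CREATE INDEX IDX_PROP_COUNTRY_CITY_EXPR ON PROPERTY
  COMPUTED BY (COUNTRY || '|' || CITY);
```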
> Basically, you're not going to get the performance you want unless
> you reduce the ratio of indexes to data.

But I have turned it over every possible way; I really need all the indexes :(
> >>> also do you thing that the superserver could be better than the classic
> >>> server on such bug? or i can wait firebird 2.5 (when it's will be
> >> available) ?
>
> Yes, I think you might get better results with superserver, just because
> it can support a much larger cache, and you'll need a large cache to
> add rows to all those indexes. You might try getting the MON$ stats
> for an insert - fetches/reads/marks/fetches. It may be possible to
> increase the cache size for the specific connection that's doing the
> insert.
Hmmm, the cache size... I was not thinking about it (it is actually at its default of 75 pages)... You think increasing it can help for the insert? But how do I do that for a specific connection (using the API)... any doc about it?
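For the MON$ stats Ann mentions, a minimal sketch of the query is below (the MON$ tables exist from Firebird 2.1 on). As for the per-connection cache, my understanding is that it is requested through the DPB (isc_dpb_num_buffers) when attaching via the API, but that should be verified against the API guide.

```sql
-- Sketch: page I/O counters for the current connection (Firebird 2.1+).
-- Run it before and after the insert and compare the deltas.
SELECT io.MON$PAGE_FETCHES,
       io.MON$PAGE_READS,
       io.MON$PAGE_MARKS,
       io.MON$PAGE_WRITES
  FROM MON$ATTACHMENTS att
  JOIN MON$IO_STATS io ON io.MON$STAT_ID = att.MON$STAT_ID
 WHERE att.MON$ATTACHMENT_ID = CURRENT_CONNECTION;
```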
> >> Are your bad nodes all level 1? Are any level 0?
> >
> > hmm, gfix don't say me where the bad node was, simply say me 69 index error found
>
> The error in the log does tell you on which level it found the error.
Aah yes, you are right, it is written in the firebird.log, and the bad nodes were at level 1 (is that bad or good?)
> >
> > yes, i have
> > 1/ the firebird.log
>
> Good. The log includes the index level
>
> Good luck,

Yes, I will need it :(