Subject Re: Index is corrupt
Author svanderclock
Dear Ann,

> OK, having taken a quick look at the table definition and gstat reports
> below, I suggest you drop all the indexes where the max dup is more than
> 1/3 of the total. The purpose of an index is to avoid reading every
> page in the table. If most keys in the index are present on every page,
> then the index doesn't do any good.

Yes, but let me explain the purpose of such an index better.
For example, the indexed value is the number of rooms in a house:
most houses have 1, 2, 3... 5 rooms,
and very, very few houses have 20 rooms.

Now, what if someone wants to see all the properties with more than 20 rooms?
The table has millions of records! Doing a full table scan is not acceptable. Also, what if someone simply wants to order the results by the number of rooms in descending order?
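Just to put numbers on the point above (the figures and the distribution are hypothetical, Python only to illustrate): with a heavily skewed column, a query on the rare value still matches a tiny fraction of the table, which is exactly where an index pays off even though the common values are highly duplicated.

```python
import random

# Hypothetical distribution: most houses have 1-5 rooms,
# only 1,000 out of a million have 20 rooms or more.
random.seed(42)
rooms = [random.choice([1, 2, 3, 4, 5]) for _ in range(999_000)]
rooms += [random.randint(20, 25) for _ in range(1_000)]  # the rare big houses

# A predicate like "rooms >= 20" matches only a tiny fraction of the rows,
# so an index range scan touches far fewer pages than a full table scan,
# even if the index has huge duplicate chains for the values 1..5.
matches = sum(1 for r in rooms if r >= 20)
selectivity = matches / len(rooms)
print(matches, f"{selectivity:.4%}")  # 1000 rows, 0.1000% of the table
```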

i was thinking that now in FB 2 the dupplicate in index is not anymore a problem (especially because internally FB add a random offset to the value to internally unduplicate it) ... is it right ?

> Another approach is to eliminate all the compound indexes and put
> ascending and descending indexes on individual fields.

That is possible; I can concatenate two or more columns into one simple VARCHAR field. But in the end it will not reduce the number of indexes, it will just increase the number of columns in the table. Do you really think it will help?
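For what it's worth, if the concatenated-VARCHAR idea is tried, there is one pitfall worth sketching: a text index sorts byte-by-byte, so any numeric part has to be zero-padded to a fixed width or the ordering comes out wrong. The column names and values here are made up, Python only to show the effect:

```python
# Hypothetical compound key (city, rooms) flattened into one VARCHAR.
pairs = [("paris", 2), ("paris", 10)]

# Naive concatenation breaks numeric ordering: "paris|10" sorts before "paris|2".
naive = sorted(f"{city}|{rooms}" for city, rooms in pairs)

# Zero-padding the numeric part to a fixed width restores the expected order.
padded = sorted(f"{city}|{rooms:05d}" for city, rooms in pairs)

print(naive)   # ['paris|10', 'paris|2']  -- wrong order for the number
print(padded)  # ['paris|00002', 'paris|00010']
```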

> Basically, you're not going to get the performance you want unless
> you reduce the ratio of indexes to data.

But I have turned it over every possible way, and I really do need all the indexes :(

> >>> also do you thing that the superserver could be better than the classic
> >>> server on such bug? or i can wait firebird 2.5 (when it's will be
> >> available) ?
> Yes, I think you might get better results with superserver, just because
> it can support a much larger cache, and you'll need a large cache to
> add rows to all those indexes. You might try getting the MON$ stats
> for an insert - fetches/reads/marks/fetches. It may be possible to
> increase the cache size for the specific connection that's doing the
> insert.
> >

Hmmm, the cache size... I was not thinking about that (actually it's at its default of 75 pages) ... you think increasing it can help with the insert? But how do I set it for a specific connection (using the API)? Is there any doc about it?
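Answering my own question a bit: with the C API the usual route is to pass `isc_dpb_num_buffers` in the database parameter block handed to `isc_attach_database()`. A minimal sketch of how the DPB bytes are laid out (Python only to show the byte layout; the constant values are the ones I find in `ibase.h`, please verify against your version):

```python
# DPB constants as found in ibase.h (verify against your Firebird headers).
ISC_DPB_VERSION1 = 1
ISC_DPB_NUM_BUFFERS = 5

def build_dpb(num_buffers: int) -> bytes:
    """Build a minimal DPB that only sets the per-connection page cache.

    Each DPB entry is: tag byte, length byte, then the value bytes
    (integers encoded little-endian).
    """
    value = num_buffers.to_bytes(4, "little")
    return bytes([ISC_DPB_VERSION1, ISC_DPB_NUM_BUFFERS, len(value)]) + value

# e.g. ask for a 10,000-page cache on the connection doing the bulk insert
dpb = build_dpb(10_000)
print(dpb.hex())  # 01050410270000
```

The resulting buffer (and its length) would then be passed as the `dpb` arguments of `isc_attach_database()`; the higher-level drivers usually expose the same knob as a connection parameter.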

> >> Are your bad nodes all level 1? Are any level 0?
> >
> > hmm, gfix don't say me where the bad node was, simply say me 69 index error found
> The error in the log does tell you on which level it found the error.
> >
> >
> > yes, i have
> > 1/ the firebird.log
> Good. The log includes the index level

Aah yes, you are right, it is written in firebird.log, and the bad nodes were at level 1 (is that bad or good?)

> Good luck,

Yes, I will need it :(