Subject: Re: Index is corrupt
Author svanderclock
> > meus latin est non valde bonus :) [my Latin is not very good]
>
> Quidquid latine dictum sit altum videtur.
> (Whatever is said in Latin seems profound.)
> (IIRC)
>
too strong for me ;)



> Having a lot of indexes will have a cost. If you insert 300.000
> records it might be worthwhile to investigate whether de-activating
> indexes during the insert process will improve the situation. On the
> other hand, deactivated indexes might not be acceptable for other uses
> during this. I assume you have considered whether all indexes need
> to be present.

yes, you are right: deactivating the indexes is not possible, because the table must be usable at all times. we also considered very carefully whether all the indexes are necessary, and yes, we absolutely need all of them



> The "Depth: 4" indicates that increasing the page size of the
> database might be a good thing.

even if only 4 of our 2000 indexes have a depth of 4?



> Index TRANSAC_SEARCH_101_IDX (3)
> Depth: 3, leaf buckets: 5703, nodes: 5150351
> Average data length: 0.00, total dup: 5150293, max dup: 2779803
> Fill distribution:
> 0 - 19% = 1
> 20 - 39% = 0
> 40 - 59% = 0
> 60 - 79% = 0
> 80 - 99% = 5702
> >>
>
> A max dup of 2.779.803 could be a source of trouble with older
> Firebird versions with updates/deletes (and maybe not very efficient
> at selects). Maybe it's all right in your situation. The "trouble"
> would be slowness, not corruption, though. This could usually be
> fixed by including the primary key (or a unique field) in the index.
> Is there a primary key or a unique constraint in TRANSAC_SEARCH_RES_FS_A_2?

hmmmm, that could be an idea!
TRANSAC_SEARCH_101_IDX is an index on the field NB_ROOM (the number of rooms, for real estate). as you can imagine, NB_ROOM is between 1 and 10 most of the time, so we have a lot of duplicates. but what can we do?

one idea to remove the duplicates completely would be (for example):
NB_room = NB_room * 10000000 + random(1000000)

in this way, NB_room values
1
4
4
3

will become (for example)
13567890
43232320
41231210
31231232

and in my queries I simply need to replace
nb_room > 3 and nb_room < 5 with
nb_room >= 40000000 and nb_room < 50000000
(>= 40000000 rather than > 30000000, since values encoded from nb_room = 3 go up to 30999999 and would otherwise match)
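The encoding above can be sketched as follows; a minimal sketch, where `SCALE`, `encode_nb_room`, and `range_condition` are illustrative names I made up, not anything from the actual schema. Note that a random suffix drawn from only 1,000,000 values cannot strictly guarantee uniqueness across millions of rows (two rows can still draw the same suffix); it makes long duplicate chains very unlikely rather than impossible.

```python
import random

SCALE = 10_000_000  # the multiplier from the proposal above


def encode_nb_room(nb_room: int) -> int:
    """Spread a room count over a random suffix to break up index duplicates."""
    return nb_room * SCALE + random.randrange(1_000_000)


def decode_nb_room(encoded: int) -> int:
    """Recover the original room count from an encoded value."""
    return encoded // SCALE


def range_condition(low_exclusive: int, high_exclusive: int) -> tuple[int, int]:
    """Translate `nb_room > low AND nb_room < high` into encoded bounds.

    Every suffix encoded for room counts low+1 .. high-1 must match, so
    the lower bound is the first value of low+1 and the upper bound is
    the first value of high.
    """
    return ((low_exclusive + 1) * SCALE, high_exclusive * SCALE)


# nb_room > 3 AND nb_room < 5 becomes: encoded >= 40000000 AND encoded < 50000000
lo, hi = range_condition(3, 5)
print(lo, hi)  # 40000000 50000000

e = encode_nb_room(4)
assert lo <= e < hi
assert decode_nb_room(e) == 4
```

The same arithmetic would apply whether the translation is done in the application or in the SQL; the quoted suggestion of a compound index including the primary key would avoid this rewrite entirely.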

but it's a lot of work to do this, so before doing it I would like to be sure that it will actually avoid duplicates in the index ... can you confirm that?


thanks
stephane