Subject | Re: Index is corrupt
---|---
Author | svanderclock
Post date | 2010-03-02T20:27:29Z
Thanks Ann,
I will try some tests tomorrow (try to manually de-duplicate the index, use only one column, etc.).
But I still have one question, and I would rather start a new thread for it: all the problems with my index happen not during the insert but during the commit (16 ms to insert the data, 2293 ms to commit the transaction)!
Time to execute the sql: 16 ms
Indexed Read: 117
Non Indexed Read: 0
Inserts: 30
Updates: 0
Deletes: 30
Time to commit the data: 2293
Do you have any idea why the commit is so long?
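
For reference, here is roughly how the two timings above are captured -- a minimal sketch with the classic C API from ibase.h, where the table name and the single INSERT are placeholders standing in for the real batch of 30 inserts/deletes:

#include <ibase.h>
#include <stdio.h>
#include <time.h>

/* Time the statement execution and the commit separately,
   so the two phases show up as distinct numbers.
   The db/trans handles are assumed already attached/started. */
void timed_insert_and_commit(isc_db_handle db, isc_tr_handle trans)
{
    ISC_STATUS status[20];
    clock_t t0, t1, t2;

    t0 = clock();
    isc_dsql_execute_immediate(status, &db, &trans, 0,
                               "INSERT INTO houses VALUES (1, 5)",
                               SQL_DIALECT_V6, NULL);
    t1 = clock();

    isc_commit_transaction(status, &trans);
    t2 = clock();

    printf("Time to execute the sql: %ld ms\n",
           (long)((t1 - t0) * 1000 / CLOCKS_PER_SEC));
    printf("Time to commit the data: %ld ms\n",
           (long)((t2 - t1) * 1000 / CLOCKS_PER_SEC));
}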
--- In firebird-support@yahoogroups.com, "Ann W. Harrison" <aharrison@...> wrote:
>
> svanderclock wrote:
> >
> >
> > Yes, but let me explain better the purpose of such an index:
> > for example, the index is the number of rooms in a house.
> > Most of the houses have 1, 2, 3 .. 5 rooms;
> > very, very few houses have 20 rooms.
>
> OK, that's very much like the female army generals ...
> >
> >
> > I was thinking that now in FB 2 duplicates in an index are no longer a problem
> > (especially because internally FB adds a random offset to the value to internally
> > de-duplicate it)... is that right?
>
> Not a random number. The lowest level (zero) of the index has nodes
> comprised of key values and record numbers. Upper levels have key
> value, record number, and a pointer to the next lower level with that
> value, so the record number, which is always unique in a table, is
> effectively part of the key. That reduces the cost of garbage
> collection for duplicates, but doesn't eliminate all the problems
> of index duplicates.
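
Just to check my understanding of that layout, a rough sketch as C structs -- an illustration only, not Firebird's real on-disk format, and the fixed key size is invented:

/* level 0 (leaf): key value plus record number */
struct leaf_node {
    unsigned char key[32];    /* key value */
    long          rec_number; /* unique within the table */
};

/* level 1 and up: the same pair plus a pointer down one level,
   so (key, rec_number) behaves as an effectively unique key */
struct branch_node {
    unsigned char key[32];
    long          rec_number;
    long          lower_page; /* page of the next lower level with this value */
};

Two records with the same key value therefore still produce distinct index entries once the record number is taken into account.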
> >
> >
> >> Another approach is to eliminate all the compound indexes and put
> >> ascending and descending indexes on individual fields.
> >
> > That is possible; I can concatenate 2 or more columns into one simple VARCHAR field.
> > In the end it will not reduce the number of indexes, but it will increase the number
> > of columns in the table. Do you really think it will help?
>
> You are aware that Firebird can use more than one index per table to
> optimize a query. It won't help with the sorting, but if you can get
> the result set down to a few thousand records, sorting isn't so bad.
> >
> >> Basically, you're not going to get the performance you want unless
> >
> > Hmmm, the cache size... I was not thinking about it (actually it is at
> > its default of 75 pages)... You think increasing it can help for the
> > insert? But how to do it for a specific connection (using the API)...
> > any doc about it?
>
> I think you can set the cache size as a parameter on the attachment -
> using Firebird's API.
>
> And yes, a cache size of 75 is nowhere near good enough for an insert
> that affects 40 indexes. Each three-level index should have at least
> four pages in cache, plus a data page, plus a pointer page, a page
> inventory page, and a few others.
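
So if I understand, something like this on attach, with isc_dpb_num_buffers in the DPB? A minimal sketch -- the page count and the database path are invented, and I assume the numeric DPB value may be encoded in 1 to 4 little-endian bytes:

#include <ibase.h>
#include <stdio.h>

int main(void)
{
    ISC_STATUS    status[20];
    isc_db_handle db = 0;
    char          dpb[16];
    short         p = 0;
    int           pages = 2048;     /* per-attachment cache, in pages */

    dpb[p++] = isc_dpb_version1;
    dpb[p++] = isc_dpb_num_buffers;         /* cache size for this connection */
    dpb[p++] = 2;                           /* value length in bytes */
    dpb[p++] = (char)(pages & 0xff);        /* little-endian value */
    dpb[p++] = (char)((pages >> 8) & 0xff);

    if (isc_attach_database(status, 0, "C:\\data\\mydb.fdb", &db, p, dpb))
    {
        isc_print_status(status);           /* print the error vector */
        return 1;
    }

    puts("attached with enlarged page cache");
    isc_detach_database(status, &db);
    return 0;
}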
> >
> >>>> Are your bad nodes all level 1? Are any level 0?
> >>
> >
> > Aah yes, you are right, it is written in the firebird.log and the bad nodes
> > were at level 1 (is that bad or good?)
>
> I don't know whether it's bad or good, but it is interesting.
>
>
> Cheers,
>
>
> Ann
>