> Looks like you are building an application for a lot of concurrent users,
> perhaps a web app?
Exactly, and actually we have a very large number of concurrent users, and fb_inet_server crashes every day (I think because the concurrent commits are too slow).
>
> Try dropping all your indexes and just create 1 ASC and 1 DESC index for
> each column. Make sure their statistics are kept up to date (they will be
> at the moment you create them if the data is already in) and then try one
> of your queries. Firebird can combine indexes.
I promise I already tried something like this. It works in 75% of the cases, but in 25% the results are too slow, and in 5% it's even a nightmare (sometimes more than 30 minutes to retrieve the result set).
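Concretely, something along these lines (a sketch only; TBL stands in for my real table name, PRICE and NB_ROOM are two of the columns):

  CREATE ASCENDING INDEX IDX_TBL_PRICE_ASC ON TBL (PRICE);
  CREATE DESCENDING INDEX IDX_TBL_PRICE_DESC ON TBL (PRICE);
  CREATE ASCENDING INDEX IDX_TBL_NB_ROOM_ASC ON TBL (NB_ROOM);
  CREATE DESCENDING INDEX IDX_TBL_NB_ROOM_DESC ON TBL (NB_ROOM);
  /* ... and so on for each column ... */

  /* refresh selectivity after big changes in the data */
  SET STATISTICS INDEX IDX_TBL_PRICE_ASC;
  SET STATISTICS INDEX IDX_TBL_PRICE_DESC;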
>
> > where Agent = xxx and nb_room > 30 and id_location=yyy order by agent desc, id_location desc, price desc
> > plan order index (IDagent_IDlocation_Price, nb_room)
>
> Do you always have "where agent = xxx"? If yes, maybe it would be a good
> idea to partition the tables as Dimitry suggested.
It is already partitioned to some extent, but we have thousands of agents and I cannot create thousands of tables. Some agents have only a few rows while others have thousands upon thousands, so it's hard to apply the same rule to everyone :(
>
> If all this fails, I have another suggestion for your problem:
>
> - have 2 tables, one with all the 30 indexes, the other without
> - SELECTs always work in "indexed" table
> - INSERT works on table without any index
>
> Have a small application listening for the INSERT event (broadcast by
> POST_EVENT in an AFTER INSERT trigger on the "not-indexed" table) and have
> it do the insert into the indexed table. Delete the inserted records
> afterwards (maybe periodically). This other table would need an extra ID
> column as the primary key for quick deletes.
>
> If it is an application where the delay of newly inserted records only
> matters to the user that did the INSERT, maybe you can do an additional
> query just to add "his new records" to the dataset which is to be shown on
> screen.
Many thanks for the suggestion; actually I already do it like this. The problem is that I have around 300,000 rows to move every day, and it takes hours and hours of processing :( Most of the time this also ends in database corruption (I don't know why, but every 2-3 days the server crashes), and the database grows by 1 GB PER DAY! (it always shrinks back after a backup/restore, though).
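For what it's worth, my current setup is roughly along these lines (only a sketch; STAGING_TBL, MAIN_TBL, the NEW_ROW event name and the column list are placeholders, not my real schema):

  CREATE GENERATOR GEN_STG_ID;

  /* the "not-indexed" table the clients insert into */
  CREATE TABLE STAGING_TBL (
    STG_ID      BIGINT NOT NULL PRIMARY KEY, /* extra key for quick deletes */
    AGENT       INTEGER,
    ID_LOCATION INTEGER,
    NB_ROOM     INTEGER,
    PRICE       NUMERIC(15,2)
  );

  SET TERM ^ ;

  CREATE TRIGGER STAGING_TBL_BI FOR STAGING_TBL
  BEFORE INSERT
  AS
  BEGIN
    IF (NEW.STG_ID IS NULL) THEN
      NEW.STG_ID = GEN_ID(GEN_STG_ID, 1);
  END^

  CREATE TRIGGER STAGING_TBL_AI FOR STAGING_TBL
  AFTER INSERT
  AS
  BEGIN
    /* wake up the small mover application */
    POST_EVENT 'NEW_ROW';
  END^

  SET TERM ; ^

The mover copies the rows into the indexed MAIN_TBL and then deletes them from STAGING_TBL by STG_ID. For showing a user his own fresh rows right away, the screen query is basically the same SELECT run as a UNION ALL over MAIN_TBL and STAGING_TBL, filtered on his agent.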