Subject | Re: [firebird-support] Performance degradation in Firebird third time |
---|---|
Author | Pavel Cisar |
Post date | 2005-11-02T07:46:11Z |
Hi,
Bergbom Staffan wrote:
> The test is done with an app that every minute adds a record to the DB
> through a stored procedure (not optimized, but impossible to change; I have
> to live with it).
>
> The stored procedure makes a few lookups, updates and adds records to a
> table used as a history-log.
>
> The input-parameters are like: CustomerId, ObjectId, Date, Time, StatusCode
>
> Every minute a new record is added for the same CustomerId and ObjectId;
> only Date and Time change.
>
> Every night the history-log is erased.
>
> A peculiar thing is that even when the insert time is as high as some 13
> seconds, it is low again if I change CustomerId and ObjectId.
>
> Could anyone please give me a clue about what is causing this and what I
> could do to maintain low insert times?
What indices do you have on the log table? I suppose you have indices
on individual fields or on CustomerId+ObjectId (which is almost always
the same). Both would result in indices with a lot of duplicates, and
inserting into and especially updating such indices doesn't perform at
lightning speed. I would advise you to either drop the indices (they
wouldn't be used for retrieval anyway) or make one composite index that
wouldn't have so many duplicates.
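
For illustration only, here is a minimal sketch of what that could look
like in Firebird DDL; the table, index, and column names below
(HISTORY_LOG, IDX_LOG_CUSTOMERID, LOG_DATE, LOG_TIME, ...) are
assumptions, not your actual schema:

```sql
-- Assumed names; adjust to the real schema before running.
-- Option 1: drop the duplicate-heavy single-column indices.
DROP INDEX IDX_LOG_CUSTOMERID;
DROP INDEX IDX_LOG_OBJECTID;

-- Option 2: replace them with one composite index that also includes the
-- date/time columns, so the key values are (almost) unique per row.
CREATE INDEX IDX_LOG_CUST_OBJ_TS
  ON HISTORY_LOG (CUSTOMERID, OBJECTID, LOG_DATE, LOG_TIME);
```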
best regards
Pavel Cisar
IBPhoenix