Subject: Re: [firebird-support] Database performance slowdown
Author: Thomas Steinmaurer
Neil,

> We are investigating an issue with one of our systems where it gradually
> slows down during the course of a day.
>
> We have our epos system that we write ourselves using Delphi XE2 and
> the IBX components.

What does your transaction model/layer look like in your client
application with IBX?

If you are not starting an explicit transaction yourself, IBX runs in
"auto commit" mode, wrapping each statement into its own transaction
context and committing the statement with a commit retaining, which
basically preserves the physical transaction context and keeps your
transaction counters (OIT/OAT/OST) stuck.
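
As a rough sketch (component names like DB, Trx and Qry are just
placeholders, not taken from your application), an explicit transaction
with a hard commit could look like this with IBX:

  { uses IBDatabase, IBQuery }
  procedure SaveSale(DB: TIBDatabase; Trx: TIBTransaction; Qry: TIBQuery);
  begin
    Trx.DefaultDatabase := DB;
    Trx.Params.Clear;
    Trx.Params.Add('read_committed');
    Trx.Params.Add('rec_version');
    Trx.Params.Add('nowait');

    Trx.StartTransaction;
    try
      Qry.Transaction := Trx;
      Qry.SQL.Text := 'UPDATE ...';  { your end-of-sale statements go here }
      Qry.ExecSQL;
      Trx.Commit;                    { hard commit ends the physical transaction }
    except
      Trx.Rollback;
      raise;
    end;
  end;

A hard Commit (or Rollback) releases the transaction context, so the
OIT/OAT counters can move forward, whereas CommitRetaining keeps the
physical transaction open.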

Any chance to run gstat -h when you experience such a response time drop?
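
For example (replace the path with your own database file or alias):

  gstat -h /path/to/your/database.fdb

In the header page output, compare "Oldest transaction" (OIT), "Oldest
active" (OAT), "Oldest snapshot" (OST) and "Next transaction". A gap
between the OAT and the next transaction that keeps growing during the
day is the typical sign of a stuck transaction and piling up back
versions.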


> We have a group of tables that are updated during the end of sale save
> process.
>
> Typically this whole save process takes milliseconds, but on one
> particular site we see it taking several seconds and occasionally taking
> up to 35 seconds.
>
> We have done some digging into the behaviour of Firebird, we are
> currently using v2.5, and it appears that Firebird needs to check the
> entire chain of back versions when reading a record from the database.
> Is this the cooperative Garbage Collection process running?

If you are running Classic or SuperClassic architecture, then yes. In
these architectures, each statement is also doing garbage collection in
its execution context.
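
As a side note, on SuperServer this behaviour is driven by the GCPolicy
setting in firebird.conf (from memory of a 2.5 config; Classic and
SuperClassic always work cooperatively regardless of this setting):

  # firebird.conf
  # GCPolicy = cooperative | background | combined
  GCPolicy = combined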


> It seems like it would be logical for a lookup to only search back
> records until it finds the record that is supposed to be visible to it,
> but it appears to go all the way back.

Visible from a transaction context. Besides the transaction ID, the
isolation level in use also needs to be taken into account. With commit
retaining happening behind the scenes, e.g. due to auto commit, you
basically end up accumulating back record versions.
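
If you want to see which isolation levels your long-running transactions
actually use, something along these lines against the monitoring tables
should work (written from memory, so double check):

  SELECT MON$TRANSACTION_ID, MON$TIMESTAMP, MON$ISOLATION_MODE
  FROM MON$TRANSACTIONS
  ORDER BY MON$TRANSACTION_ID;

MON$ISOLATION_MODE is 0 for consistency, 1 for concurrency (snapshot)
and 2/3 for read committed with/without record version. The oldest
active transactions in that list are typically what holds garbage
collection back.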


> This causes us a slowdown if there are a lot of back versions still
> active, and flags up data as corrupted if something untoward has
> happened to them, even if the current committed version of the record
> seems to be valid.
>
> Can anyone confirm this behaviour and any way to work around it?

To verify whether the slowdown of a single statement is driven by
cooperative garbage collection, you can use the Trace API and its
detailed per-statement statistics output (counters like "backouts",
"purges" and "expunges"). The MON$RECORD_STATS monitoring table is
another source for that information, if your statement is still
visible through the MON$ tables.
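
For example, something like this (off the top of my head, so please
verify the column names against your 2.5 system tables) shows the
record-level counters per currently running statement:

  SELECT s.MON$STATEMENT_ID,
         s.MON$SQL_TEXT,
         r.MON$RECORD_BACKOUTS,
         r.MON$RECORD_PURGES,
         r.MON$RECORD_EXPUNGES
  FROM MON$STATEMENTS s
  JOIN MON$RECORD_STATS r ON r.MON$STAT_ID = s.MON$STAT_ID;

High backout/purge/expunge numbers for your end-of-sale statements would
confirm that cooperative garbage collection is what you are paying for.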


--
With regards,
Thomas Steinmaurer
http://www.upscene.com/

Professional Tools and Services for Firebird
FB TraceManager, IB LogManager, Database Health Check, Tuning etc.