Subject: Re: [firebird-support] firebird performance - help
Author: Aage Johansen
cosamektoo wrote:
> I'm using Firebird 1.5 on Solaris on Intel.
> My application uses, among other things, sampling tables in which data
> is inserted and deleted all the time.
> When my database file becomes large (say 320 MB), all the transactions
> become slower.
> If I backup and restore the database, the file becomes much smaller
> (say 5 MB) and the transactions are fast again.
> I cannot backup and restore the database on a regular basis since my
> server needs to be online all the time.
> Any suggestions?

I think Ann wrote an answer to you, but it ended up somewhere in the thread:
"Re: [firebird-support] Re: Performance when deleting a lot of records".
Try to find it, it starts with:

As has already been said, having a database grow from 5 MB to 320 MB
and shrink back after a backup/restore is an indication that something
is going wrong. For some reason, garbage collection is not happening.
Normally the reason is that there is a long-lived snapshot mode
transaction that could see early versions of records. One way to find
out is to look at the output of a gstat - I use the command-line
version. It must be run on the server, I think.

Start with gstat -h to get header information.

Check the difference between the next transaction and the oldest
snapshot - they should be reasonably close. If they're a couple hundred
thousand transactions apart - or even tens of thousands - you need
to find out where the long-lived transaction is started and stop it, or
change it to a read-only read-committed transaction.
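Ann's check can be scripted so it runs without eyeballing the header every
time. A minimal sketch, assuming you've captured `gstat -h` output to a file
(the field names "Oldest snapshot" and "Next transaction" are real gstat
header fields; the counter values below are invented for illustration):

```shell
# Sample of the gstat -h header, saved to a file. In practice you would run
# something like:  gstat -h /path/to/your.fdb > header.txt
cat > header.txt <<'EOF'
    Oldest transaction      12041
    Oldest active           12042
    Oldest snapshot         12042
    Next transaction        245880
EOF

# Pull the two counters out of the header and compute the gap.
next=$(awk '/Next transaction/ {print $NF}' header.txt)
oldest=$(awk '/Oldest snapshot/ {print $NF}' header.txt)
gap=$((next - oldest))
echo "transaction gap: $gap"

# A gap in the tens of thousands or more points to a long-lived
# transaction blocking garbage collection.
```

With the sample numbers above the gap is over 230,000 - exactly the kind of
figure that would confirm a stuck snapshot transaction.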

Aage J.