Subject: Re: [firebird-support] database ballooning problem
Author: Helen Borrie
At 04:07 AM 6/09/2005 +0000, you wrote:
>Our client-server Delphi software currently uses an InterBase database
>with FB 1.5 services. A couple of our new clients' databases grow from
>a few MB (e.g. 23MB) to several GB (e.g. 4GB) while holding very little
>data compared to the others. I've collected some statistics, shown
>below:
>
>Number of users: 2
>
>Date: 29/04
>Database size: 4GB
>table Tfile count: 66
>table Lassign count: 193 (only one BLOB field)
>Action: Run Backup/Restore
>As a result of the Backup/Restore process, the database size was
>reduced to 23MB.
>
>Date: 06/09
>Database size: 2.483GB
>Number of transactions: 54,109
>table Tfile count: 231
>table Lassign count: 1370 (only one BLOB field)
>
>The software does not run batch processes. Does anyone know:
>
>1) What could cause the problem?
>
>2) It was suggested to me that it might have to do with
>IBODatabase.Savepoint in our Delphi code, because SavePoint keeps the
>old state of the records and creates new versions, and the old ones
>are not garbage collected unless you run backup/restore. Is this true?

No, per se, it is not true.

However, if you never use anything except Autocommit transactions (or IBO's
Savepoint method, the "manual" equivalent), your transactions never really
end, so their garbage never becomes eligible for collection. Both are
aspects of the same problem: CommitRetaining ("soft commit") gets called, a
hard Commit never occurs, and the old record versions just remain
"interesting" to the database engine for ever. This looks to be the case
here.
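You can confirm the diagnosis from the database header statistics. For
example (run it with appropriate credentials; the file name here is just a
placeholder):

  gstat -h yourdb.fdb

Compare "Oldest transaction" and "Oldest active" with "Next transaction":
if only soft commits are happening, that gap keeps growing and the garbage
behind it is never collected.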

The Autocommit feature is a legacy of the old Paradox/BDE model, designed
to dumb down transaction-awareness for Delphi developers and give them a
way to use InterBase, MSSQL, etc. as though they were Paradox. It has its
uses, but it should be applied with a lot of discretion, and only in places
where there are opportunities to enforce hard commits with reasonable
frequency.
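For the places you can rework, the pattern is simply a short, explicit
transaction ended with a hard Commit. A rough sketch (the component and
method names are indicative only - check them against the IBO version you
are actually using):

  // txn: an explicit transaction component attached to your connection
  txn.StartTransaction;
  try
    // ... post the inserts/updates for this unit of work ...
    txn.Commit;       // hard commit: the transaction really ends here
  except
    txn.Rollback;     // don't leave the transaction dangling on error
    raise;
  end;

Every unit of work that ends this way stops being "interesting" and its
garbage can be collected in due course.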

If you are determined not to program your way out of this situation, you
can alleviate it somewhat by purposely running frequent sweeps at times
when none of your users is logged in and holding up access to the garbage.
Given the volume of garbage you are stacking up (you are getting to the
point where you have about 170 times as much garbage as you have live
data), I would venture to guess that a reduction in database growth would
not be the only benefit to be gained from a daily sweep.
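A manual sweep is a single gfix call, e.g. (the path and credentials are
placeholders; run it from a scheduled job outside working hours):

  gfix -sweep -user SYSDBA -password yourpassword yourdb.fdb

Bear in mind that a sweep cannot remove record versions that are still
visible to any open transaction, which is another reason to run it when
nobody is connected.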

./heLen