Subject Re: [firebird-support] Database size and garbage collection
Author Ann W. Harrison
ed_heiberger wrote:
>>Even when that condition is met, the record is only garbage
>>collected when some transaction tries to read it, so gstat may be
>>counting records that are awaiting garbage collection.
> Can anyone help explain the "Even when this condition ..." part?
> Why would my application ever try to read a record it has deleted?

Normally that happens during a range or exhaustive retrieval of the table.
One way to force it to happen is to use gbak to back up the database, or
to run gfix with the -sweep option.
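
For example, either of these commands will visit every record and so
trigger garbage collection (the paths and credentials here are
placeholders, not from the original question):

```shell
# A full gbak backup reads every record, garbage-collecting old versions
gbak -b -user SYSDBA -password masterkey /data/mydb.fdb /data/mydb.fbk

# A manual sweep with gfix does the same without writing a backup file
gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb
```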
> Because, I have a similar problem where the database file grows
> constantly. The application does a lot of record deletes. I'm
> using the native Firebird C interfaces in my application.
> I "fixed" the application by adding an SQL query after every 100
> deletions. A simple query "select * from mytable" seems to do the
> trick.

A nightly backup would also work, though the database file might grow
larger than it would with more frequent "select *" statements. Select
count(*) is cheaper. If you know some range of values that is common to
the deleted records, select count(*) from table where <condition> is
faster yet.
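
As a sketch (the table and column names below are invented for
illustration, not taken from your schema):

```sql
-- Cheapest full pass: touches every record version without
-- fetching column data back to the client
select count(*) from mytable;

-- Cheaper still if the deleted rows cluster in a known range,
-- e.g. old rows keyed by a timestamp column
select count(*) from mytable where created < '2005-01-01';
```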

> When I run the same test with the "fixed"
> application, the database file grows to a fixed size and stays
> there.

You'll never see the file get smaller, but the rate of growth will go
down if you free up space occupied by deleted records.
> Here's gstat output after a test run where the database grew
> throughout the test cycle.
> Database header page information:
> Flags 0
> Checksum 12345
> Generation 50041
> Page size 4096
> ODS version 10.0
> Oldest transaction 50036
> Oldest active 50037
> Oldest snapshot 50037
> Next transaction 50038

Those numbers are perfectly fine. The gstat output I was describing
comes from the -a and -r options, which produce a long report on data
and index distribution, record versions, and so on.
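
For example (the database path and credentials are placeholders):

```shell
# -a analyzes data and index pages; -r adds record-version statistics,
# which is where records awaiting garbage collection show up
gstat -a -r -user SYSDBA -password masterkey /data/mydb.fdb
```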