Subject | Re: [firebird-support] Database File Size |
---|---|
Author | Helen Borrie |
Post date | 2004-08-18T01:26:09Z |
At 12:44 PM 18/08/2004 +1200, you wrote:
>HI all, We have a database used in a real-time environment, with a
>reasonable amount of data being collected. The data is replicated to a
>second copy of the database ("offline"), and the "online" database is
>purged daily, retaining the last 7 days worth of data. The database file
>size after backup-restore is about 50Mb. Each day it continues to grow,
>even though we are backing up daily (sweeping is part of the backup as we
>understand).

No. Backup doesn't sweep. Garbage collection (where possible) is performed
during backup, so it will release any space left by updates that can be
released; but GC can't release space held by delete stubs.
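For illustration (the database path, backup file name and credentials below
are just placeholders for your own), a plain gbak backup lets that garbage
collection run, while adding the -g switch suppresses it for a faster backup:

   gbak -b -user SYSDBA -password masterkey /data/online.fdb /data/online.fbk
   gbak -b -g -user SYSDBA -password masterkey /data/online.fdb /data/online.fbk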
>I would have expected that the database would reach a file size where
>any new records would be using the space cleared by the sweep process, and
>that the file size would reach a limit and grow no further.

Theoretically true; although even a sweep (which is not happening on your
system) can't clear garbage that is stuck by never-ending transactions.
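If you want to see whether such a transaction is gumming things up, gstat
will show you the transaction counters from the database header (the path
below is a placeholder):

   gstat -h /data/online.fdb

Watch the gap between "Oldest transaction"/"Oldest active" and "Next
transaction": if the oldest numbers stay frozen while Next keeps climbing,
something is holding a transaction open and garbage is piling up behind it.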
There are two ways to have a sweep happen: either set a non-zero sweep
interval to get an automatic sweep whenever the "gap" between the Oldest
Transaction and the Oldest Active Transaction passes the threshold
configured in the sweep interval; or explicitly run a sweep from gfix,
ideally when no users are logged in.
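As a sketch (placeholder path and credentials again), the interval is set
with gfix -housekeeping and a manual sweep is kicked off with gfix -sweep:

   gfix -housekeeping 20000 -user SYSDBA -password masterkey /data/online.fdb
   gfix -sweep -user SYSDBA -password masterkey /data/online.fdb

Setting the interval to 0 turns automatic sweeping off altogether.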
Since a well-behaved system will never get to the point of triggering an
automatic sweep, you should make a gfix sweep part of your regular
cleanup. Also note that sweeping is the only way (other than restoring, of
course) to release the stubs of deleted rows back into the available space.
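On a Unix-style host that can be as simple as a cron entry run at a quiet
time of day (placeholders again, and assuming gfix is on the PATH; a Windows
Scheduled Task can run the same command):

   # sweep the online database at 03:30 every morning
   30 3 * * *  gfix -sweep -user SYSDBA -password masterkey /data/online.fdb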
>However it seems to grow continually, and not reach a particular file size.

If the filesystem supports an unlimited file size, then the file size of a
single-file database is limited only by the amount of available disk
space. The server will just request more blocks of disk as it requires them.
>Due to the fact that the software is part of a 24 x 7 control system
>(attached to a Scada System), it is not going to be that easy to automate
>a daily backup AND restore. We may have to do that, but for the moment I
>cannot understand why the file size is continuing to grow. Any ideas?

Yup; if the file size isn't consistent with the amount of data being
stored then, one way or t'other, you are accumulating garbage faster than
you're clearing it.
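A quick way to see where that garbage is sitting is gstat's record and
version statistics (placeholder path once more):

   gstat -r /data/online.fdb

Tables showing large "versions" counts relative to their record counts are
the ones where the garbage is accumulating.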
/heLen