Subject: Re: Shrink Firebird
Author: Adam
--- In, "Kurt Schneider"
<kjundia@...> wrote:
> Hi...
> Really, the problem is hard and the solution is really complex. But I
> think that in future versions of Firebird this "shrink" solution is
> necessary, or at least something easier for database administrators.
> You see, I have over 200 database users across various Brazilian
> states, and all support happens remotely. Time and again I must back
> up and restore databases, and this operation consumes a lot of time
> for both support and clients.
> I am not trying to justify my geographic client problem, but I see
> "shrink" as a new and easy way to reduce database size.
> I also do not understand how to configure Firebird so that it does
> not create a log file. Another pertinent question: when my database
> is 8 MB, I run gbak and it creates a backup file of 1.2 MB. Well,
> when I restore that backup, the file returns to a similar size
> (7.8 MB). What does gbak do to reduce the size of the database so
> significantly?
> Sorry for my bad English, but the question and the dialogue are fine.
> Regards,
> Kurt.

Hello Kurt,

I believe the opposite, that shrinking of a database is becoming far
less important.

For about AUD 100 (USD 75 / 60 Euro), you can pick up a 200GB hard
drive. This will vary depending on the taxes, transportation and other
costs. SCSI drives are more expensive.

Moving to other storage devices, databases of the size you seem to be
dealing with can fit on an SD card.

My point is that it is much cheaper for me to buy a larger hard drive
than to pay someone to go onsite to run a backup/restore or a shrink.

A shrink procedure will take longer than a backup, yet it does not
have the benefit of rebalancing the indices.
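For reference, the backup/restore cycle is just a two-step gbak run. This is a sketch only; the database path, backup path, and the SYSDBA/masterkey credentials below are placeholders for your own setup:

```shell
# Back up the live database to a compact .fbk file.
# (Paths and credentials are example values only.)
gbak -b -user SYSDBA -password masterkey /data/app.fdb /data/app.fbk

# Restore into a NEW file. The restore rebuilds every index from
# scratch and carries no old record versions, which is why the
# resulting database is smaller.
gbak -c -user SYSDBA -password masterkey /data/app.fbk /data/app_new.fdb
```

Restore to a new filename and swap the files only while nobody is connected; never restore over the top of your only copy of a live database.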

In answer to your question on gbak: the priority of a live database is
to store and retrieve information quickly.

The database file will contain old record versions which cannot be
garbage collected because you still have transactions running that
need them. The gbak file does not contain these old record versions.
The database file also contains fully built indices, whereas the gbak
file only needs to store which fields make up each index.

The database will reuse space, so do not panic if it grows sharply at
the start; it will reach an equilibrium and then grow with the amount
of data.

I am a bit worried that you are talking about remote sites as if you
need to transfer large database files around. You must understand that
you cannot simply file-copy a database that is in use, because you
will get corruption. You can run gbak to get an fbk file, compress
that fbk file using zip or 7zip, and get something much smaller than
1.2MB, probably somewhere around 250KB.
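That transfer workflow can be sketched as follows; again, the paths, credentials, and remote host name are illustrative placeholders, not real values:

```shell
# Take a consistent backup of the running database with gbak.
gbak -b -user SYSDBA -password masterkey /data/app.fdb /tmp/app.fbk

# Compress before sending over a slow link; .fbk files usually
# compress very well because they contain no built index pages.
zip /tmp/app.fbk.zip /tmp/app.fbk

# Copy the small archive to the remote site, e.g. with scp.
scp /tmp/app.fbk.zip support@remote-site:/incoming/
```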

Firebird does not have an external log file to manage transaction
recovery. It writes the changes directly to the database (if possible,
to the same page as the old record version). What Ann meant is that
you cannot accuse Firebird of having a much larger file than another
database if you do not count the size of the transaction log in that
other database.

MGA databases by definition must store more data than locking
databases. This is because with a locking database, as you make a
change the old value is replaced, whereas MGA must keep both the old
and new values available for a period of time. This feature is what
allows for excellent performance and stable reads, and in case you are
wondering: no, MGA cannot be turned off.
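You can see the effect yourself with two isql sessions against the same database. The transcript below is a sketch; the database path, credentials, and the accounts table are all made up for illustration:

```shell
# Terminal 1: open a snapshot transaction and read a row.
isql -user SYSDBA -password masterkey /data/app.fdb
# SQL> SET TRANSACTION SNAPSHOT;
# SQL> SELECT balance FROM accounts WHERE id = 1;  -- reads, say, 100

# Terminal 2: change and commit the same row.
isql -user SYSDBA -password masterkey /data/app.fdb
# SQL> UPDATE accounts SET balance = 50 WHERE id = 1;
# SQL> COMMIT;  -- a new record version is written; the old one remains

# Terminal 1 again, inside the same open transaction:
# SQL> SELECT balance FROM accounts WHERE id = 1;  -- still sees 100
# SQL> COMMIT;  -- only now can the old version be garbage collected
```

Both record versions exist in the database file until the snapshot transaction that needs the old one has finished, which is exactly the data a locking database never has to keep.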