Subject | Managing Firebird databases |
---|---|
Author | Bob Murdoch |
Post date | 2004-11-11T14:49:19Z |
I have a Firebird 1.5 database that's been in production for about
four years (started with IB5, moved to IB6, moved to FB). The
database size at this point is 16.5GB. The applications that use this
database have morphed from their original inception as data reporting
and analysis tools into interactive systems with a direct financial
impact on the business.
I am not having any performance problems, especially since moving to a
multi-cpu machine and FB Classic. The issue that I'm dealing with is
the size of the database, and managing disaster recovery. I would
like to hear what other users have done to ensure the operational
readiness of their databases, limiting downtime to an absolute
minimum.
I think we have a fairly solid routine: a scheduled job runs every
night that performs a backup, restores it to a temporary folder, and
validates the newly restored database. The restored
database is kept for seven days, so that should we find a problem in
the production database we can hopefully find a solution within one of
the seven 'offline' copies.
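In rough outline, the nightly job does something like the sketch below. The connection string, credentials, paths, and file names are placeholders rather than the real values; the actual job drives Firebird's standard gbak and gfix tools in the same order.

```python
import datetime
import subprocess
from pathlib import Path

# Placeholder connection string, working folder, and credentials.
PROD_DB = "dbserver:/data/prod.fdb"
WORK_DIR = Path("/backups/nightly")
USER, PASSWORD = "SYSDBA", "********"


def run(cmd):
    """Run one of the Firebird command-line tools, failing loudly on error."""
    subprocess.run(cmd, check=True)


stamp = datetime.date.today().isoformat()
fbk = WORK_DIR / f"prod-{stamp}.fbk"
restored = WORK_DIR / f"prod-{stamp}.fdb"

# 1. Back up the production database (gbak -b).
run(["gbak", "-b", "-user", USER, "-password", PASSWORD, PROD_DB, str(fbk)])

# 2. Restore the backup into a temporary copy (gbak -c).
run(["gbak", "-c", "-user", USER, "-password", PASSWORD, str(fbk), str(restored)])

# 3. Validate the freshly restored copy (gfix -v -full).
run(["gfix", "-v", "-full", "-user", USER, "-password", PASSWORD, str(restored)])

# 4. Keep only the last seven restored copies (and their backup files).
for old in sorted(WORK_DIR.glob("prod-*.fdb"))[:-7]:
    old.unlink()
for old in sorted(WORK_DIR.glob("prod-*.fbk"))[:-7]:
    old.unlink()
```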
The backup takes about an hour and a half, the restore takes six
hours, and the validation about 45 minutes. Although these procedures
do not require the database to be taken offline, and we do have
roughly an eight-hour window of little to no usage, I worry that as
the size of this database grows the routine will exceed this window
and start to impact the performance of early-morning users.
I'm wondering how the 'big boys' handle their 100GB-to-terabyte
databases? Are they throwing more hardware at the problem as the size
increases? Is there something I am missing?
Thank you for sharing any thoughts or techniques that you might have
used or considered using.
Bob M..