Subject: Re: [ib-support] Backing up / Restoring large databases
Author: Jason Chapman (JAC2)
Post date: 2003-04-03T09:15:58Z
Hi Rod,
Backup time tends to be pretty linear: the more blobs, the faster the
backup; the more small records, the slower, proportionally. So you should be
able to estimate how your backup times will degrade as the DB grows. Always
remember to turn off garbage collection when the backup is a precursor to a
restore; it can take hours off.
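For the record, disabling garbage collection is just gbak's -g switch; a
minimal sketch, with the paths and credentials as placeholders:

    gbak -b -g -user SYSDBA -password masterkey server:/data/live.gdb /backups/live.fbk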
We have a site with a 14GB DB that has benefited from faster disks and
faster processors when doing the backup and restore. We haven't actually
tested the backup/restore cycle with a single vs. dual processor, but I
doubt it would make much difference, as both processes should be disk-bound,
not processor-bound (with the exception of index creation).
Our largest supported DB is 110GB and is full of images, so it has a large
number of blobs. A problematic recent DB was one with > 100 million records
and one poor index that failed to restore because the sort went to a single
temp directory and the index was > 2GB.
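If you hit the same temp-space limit: as I recall, Firebird 1.0 lets you
list several sort directories in isc_config (ibconfig on Windows), each with
its own byte limit, so a big index sort can spill across disks. The sizes
and paths below are examples only:

    TMP_DIRECTORY 2000000000 "/disk1/fbtmp"
    TMP_DIRECTORY 2000000000 "/disk2/fbtmp"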
Techniques I would consider to keep within your downtime window:
1) Restore without activating indices.
- turn the absolutely required ones on
- declare the DB live
- activate the remaining indices
Obviously performance during activation will be poor, but this tends to be a
slack time, just after the system comes back online. A sketch of the
commands follows.
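In outline, that is gbak's -i switch (restore with indexes inactive), then
ALTER INDEX from isql; the index names here are made up:

    gbak -c -i -user SYSDBA -password masterkey /backups/live.fbk server:/data/live.gdb
    isql server:/data/live.gdb -user SYSDBA -password masterkey
    SQL> ALTER INDEX IDX_ORDERS_CUSTID ACTIVE; /* must-have index first */
    SQL> COMMIT;
    SQL> /* declare the DB live, then during the slack period: */
    SQL> ALTER INDEX IDX_ORDERS_DATE ACTIVE;
    SQL> COMMIT;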
2) Use replication to produce a second, near-live DB (I recommend this for
large DBs anyway, for disaster recovery). The server for the second DB can
be workstation spec.
- have a procedure for making the second DB live
- run for a while
- back up and restore the second DB
- stop users from using the live DB
- ensure the replication queue is depleted
- make the second DB live and vice versa
- let users back in
You will have a very clean live DB, and downtime would be more like an hour
(if you have to copy the second DB to the live server). A rough outline of
the switchover is sketched below.
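Something like this, assuming a Unix box and gfix for the shutdown; the
file names are hypothetical, and checking that the replication queue has
drained depends on whichever replicator you use:

    # stop users from using the live DB
    gfix -shut -force 5 /data/live.gdb -user SYSDBA -password masterkey
    # ...confirm the replication queue is empty...
    mv /data/live.gdb /data/retired.gdb
    mv /data/second.gdb /data/live.gdb
    # let users back in
    gfix -online /data/live.gdb -user SYSDBA -password masterkey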
Shameless plug
I have had to do this a number of times, and if anyone else is coming up
against the same kind of issues, could I suggest it is time to review your
system, your DB and your day-to-day DB routines. I know a really great
consultant in the UK (anywhere, if you pay travel) who can assist in this
process and breathe new life into tired DBs and clunky code. Best to date:
cutting a 12-hour report down to 1 minute with no IB/FB code changes (just
Delphi).
/Shameless plug
Jason Chapman
JAC2 Consultancy
Training - Development - Consultancy
Delphi, InterBase, Firebird, OOAD, Development lifecycle assistance,
Troubleshooting projects, QA.....
www: When I get round to it....
Mob: (+44) 07966 211 959 (preferred)
Tel: (+44) 01928 751088
We are in the process of changing ISPs; unfortunately this may result in
bounced e-mails for a short period some time in Feb-Mar 03. If this
occurs, please redirect your mail to jason@....
""rodbracher"" wrote in message news:b6gp1i+eon1@......
> We are at a point where a number of our customer Firebird v1.0 GDBs
> are 1 - 2 GB in size. The backups take +- 40 min and restores even
> longer. The growth rate is fair and I expect the GDB size to be 6 - 8
> GB in a couple of years. Is this going to mean a backup/restore cycle
> of near 8 hours? A lot of these operations are near 24/7 - 8 hours
> is a lot of down time.
>