Subject: RE: [firebird-support] Re: Create shadow's commit on large database locking up system until finished.
> > > Ron James
> > once a week! this is a bit of overkill IMHO. how often are you making
> > shadows? I would have thought you create it when you restore and
> > that's it.
> > Alan
> Our system automatically backs up and restores once a week. At that
> point we need to re-create the shadow, or have the restore create it.
> One twist is that we have the system set up to do a complete fail-over
> to the secondary disk when a failure of the first disk is detected,
> automatically activating what was the shadow database as the primary
> database. Our hope was that the system would be available for the
> users' transactions as much as possible, since the users are concerned
> that the weekly backup/restore process can take overnight on large
> databases. If Firebird blocks access to the database while it creates
> much of the shadow, we will potentially have to re-think the recovery
> strategy.
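For reference, the two operations described above (re-creating the shadow after a restore, and promoting the shadow to a standalone database on fail-over) look roughly like this. This is only a sketch; the paths, user, and password are hypothetical and would need adjusting for your installation:

```shell
# Hypothetical paths and credentials -- adjust for your site.

# After the weekly restore, re-create the shadow from isql:
isql -user SYSDBA -password masterkey /data/primary.fdb <<'EOF'
CREATE SHADOW 1 '/mnt/disk2/primary.shd';
EOF

# On failure of the primary disk, convert the shadow file into a
# standalone database with gfix, then point clients at it:
gfix -activate /mnt/disk2/primary.shd
```

Note that `gfix -activate` detaches the file from its shadow role; after a fail-over you would need to create a fresh shadow for the new primary.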
Backup and test-restore once a week by all means - I do it nightly. If I get
an error in the test restore, I'm on it the next day.
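A nightly backup with a test restore of the kind described here can be sketched with gbak along these lines. The file names and credentials are made up for illustration:

```shell
# Hypothetical names, paths, and credentials.

# Back up the live database (-b = backup):
gbak -b -user SYSDBA -password masterkey \
  /data/primary.fdb /backups/primary.fbk

# Prove the backup is restorable by restoring it to a throwaway
# file (-c = create); a failure here is the early warning Alan
# describes, and means the backup or the source database needs
# attention the next day:
gbak -c -user SYSDBA -password masterkey \
  /backups/primary.fbk /tmp/restore_test.fdb \
  && echo "test restore OK" \
  || echo "test restore FAILED"
```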
You're aware that any corruption in the master is mirrored to the shadow
instantly, aren't you?
You seem to have the backup/restore cycle mixed in with the shadow strategy.
I gave away shadowing in 1998. I use replication now in lieu of tape backups
and shadows. Tape backups are still there, but by the time they'd be needed
they're too old, and they only ever serve as history. With replication I get
a live database on a live server, with no disks to swap and no threat of a
disk hiccup (in the dying throes of a failing disk) corrupting a good copy of
the database. The swap to the next server is a simple task in the preferences
of the application, which every user is trained for. I can literally take my
time wandering over to the server to see what the problem was. It's happened
a couple of times (hardware failure) in the last few years. I don't sweat any
more. If I relied on shadows, they'd need me or someone equivalently
knowledgeable to swap the hard disks around, reboot, etc., before they were
back on their feet.
One caveat: if your application does very heavy writing, then replication is
not so good - the time taken to "catch up" is slow. I'm thinking of
converting FBReplicator over to reading from the remote server but using the
embedded server to write to an adjacent database file. I think that would
speed things up a lot.