Subject Re: [firebird-support] Re: Error loading DB using FB 2.1.3
Author Norman Dunbar
Evening Steve,

> That is all well and good but you would have to test every back up every
> time to make sure it was restorable.
Yes. OK, possibly not always convenient or practical; however, at least
some of your backups (one per database, per week, perhaps?) could be used
to create a new database, just to test that those backups are actually
recoverable.

If you never test a backup file, you are never going to know that it's
usable - if you test some, at least you have an idea that your processes
are working.

In my day job I'm an Oracle DBA, and while it's great to know that your
backups completed OK, it's even better to know that they are actually
usable to recover a database well before you might actually need them!

Trust me, the feeling you get when the database recovery comes up with
"file not readable" is not a good one!


> We often run into situations where
> duplicate keys that don't exist in the original database's index cause a
> restore to fail.
That's interesting. Are you saying that a restore of a database with no
duplicates causes duplicates? I've had that before - on Oracle - when
the database language (character set) was incorrect: German accented
characters were losing their accents, and the resulting unaccented
values caused duplicates.

However, the restore testing that we did showed us where we had a
problem that needed to be fixed.


> Somehow, records that can be selected using a NATURAL
> plan do not exist in the primary index.
Again, interesting. Is your "primary index" actually a Primary Key
Constraint, or a Unique Index (or Unique Constraint)? I ask because
Unique indexes/constraints do allow "duplicates", provided all columns
in the index are NULL, since those records are not actually indexed.
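
For what it's worth, that behaviour is easy to see on Firebird 2.x:
create a two-column unique index and insert a couple of rows whose key
columns are all NULL - both go in without complaint. A minimal sketch
only; the database path and SYSDBA credentials are made up, and it
assumes Firebird's isql is on the PATH:

isql -user SYSDBA -password masterkey <<'EOF'
CREATE DATABASE '/tmp/nulltest.fdb';
CREATE TABLE t1 (a INTEGER, b INTEGER);
CREATE UNIQUE INDEX t1_uq ON t1 (a, b);
COMMIT;
/* Both inserts succeed: all-NULL keys do not collide with each other. */
INSERT INTO t1 VALUES (NULL, NULL);
INSERT INTO t1 VALUES (NULL, NULL);
COMMIT;
EOF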


> Testing every backup for
> restorability is not exactly practical in the real world where I have
> about 100 servers with a couple of hundred databases to support.
I agree. But some backups need to be tested - create a clone of your
database with a "gbak -create ..." for example. Run some automated
testing scripts against it, or whatever.
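
Something along these lines, for example - the paths, file names and
credentials below are invented, and it assumes gbak and isql are
available on the box doing the testing:

# Restore last night's backup into a scratch database...
gbak -create /backups/mydb.fbk /tmp/restore_test.fdb -user SYSDBA -password masterkey

# ...then run whatever quick sanity check you trust against the copy.
isql -user SYSDBA -password masterkey /tmp/restore_test.fdb <<'EOF'
SELECT COUNT(*) FROM RDB$RELATIONS;
EOF

# Tidy up afterwards.
rm /tmp/restore_test.fdb

If the gbak step falls over, you've found out on a scratch copy, which
is a far better time and place to find out than when you actually need
the backup in anger.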


> If we
> tested every back up we took we would do nothing else. Besides, some of
> the databases are in the 100GB+ range. We would never finish one test
> restore before we had to start the next one.
>
> Also, isn't that rather like saying you should run a complete set of
> diagnostics on your car every time you take it out of the driveway?
Not really. But my owner's manual does advise daily checks, weekly
checks, monthly checks and so on.


> Telling me to test every back up seems like documenting bad behaviour
> rather than fixing it.
Possibly. If I create a successful backup with gbak or nbackup, all well
and good - maybe nothing actually went wrong with the backup. However,
what if something happened during the actual write to the device?
Windows is known for not quite flushing all data to the disc from time
to time, so it's possible that gbak/nbackup have completed a backup, but
the underlying infrastructure has put the boot in and rendered the file
unusable.

That's a situation I'd like to know about, personally, rather than
finding out when I really need *that* particular backup.

I have Oracle databases in the terabyte range, and I still have to test
that their backups are usable. Now that's a boring task of a Monday
morning! ;-)


Having said all that, yes, it would be nice if a recovery could tell me
where I had a problem. There's never too much information in an error
log/report - unless it's in a Java exception stack dump I suppose!


Cheers,
Norman.