Subject | Re: [firebird-support] RE: Advice on storing large numbers of documents in a Firebird database
---|---
Author | Ann Harrison
Post date | 2013-11-17T19:23:02Z
On Sun, Nov 17, 2013 at 11:06 AM, marcus <marcus@...> wrote:
>
> However, out of interest, would it be possible to do something like a
> gfix verification and possibly a sweep after copying, to check the
> integrity of the copied file, or is this just as slow?
> nearly as slow as a regular backup, as gfix uses the same internal
> mechanisms as gbak. Ann or Helen has written about that here in the
> list, or maybe I've read it in the Firebird book.

Gfix and Gbak each read the whole database, so they each take longer
as the database gets larger - otherwise they're opposites.

Gfix validates the physical structure of the database, making sure that
every page is either in use or listed as free and that no pages are
declared as used for one purpose and actually used for another. At the
record level, it checks that back version, fragment, and blob pointers
point to back versions, fragments, and blobs. Gfix also validates the
structure of indexes.

Gbak makes a copy of the metadata and data in the database on backup
and recreates the database on restore. It does not use indexes on data,
so it won't detect a broken index. A gbak restore will uncover logical
errors like duplicates on primary keys, broken foreign key constraints,
and other violations of logical consistency.
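For illustration, a validation pass looks something like this - a
minimal sketch, where the database path and the SYSDBA credentials are
placeholders for your own setup:

```sh
# Read-only structural validation: reports orphan pages and broken
# back-version, fragment, blob, and index structures. The -n flag
# keeps gfix from attempting any repairs.
gfix -v -full -n -user SYSDBA -password masterkey /data/mydb.fdb
```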
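A regular backup is just as simple; again, file names here are only
examples:

```sh
# Logical backup: copies metadata and data without using indexes,
# so a broken index neither stops it nor gets noticed by it.
gbak -b -user SYSDBA -password masterkey /data/mydb.fdb /backups/mydb.fbk
```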
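And restoring into a scratch file is how you surface the logical errors
mentioned below, since the restore rebuilds every index and re-checks
the constraints. Paths are again hypothetical; don't restore over the
live database:

```sh
# Test restore into a throwaway file; failures here point to logical
# inconsistencies such as duplicates on a primary key.
gbak -c -user SYSDBA -password masterkey /backups/mydb.fbk /tmp/mydb_check.fdb
```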
If I cared about my data, I'd run gbak regularly to produce backups. I'd
run gfix -validate from time to time looking for structural errors. I
wouldn't be too fussy about it - the odd orphaned page or back version
is just lost space and is the expected result of a crash shutdown.
People have stopped tripping over power cords by now, and operating
systems don't crash the way they did in the bad old days, but you still
might find an orphan here or there. Finally, I'd restore the backup
occasionally to find logical errors.

Good luck,

Ann