Subject: Re: gfix questions
Author Adam
--- In firebird-support@yahoogroups.com, "Markus Ostenried"
<macnoz@...> wrote:
>
> On 12/20/06, heineferreira <heineferreira@...> wrote:
> > I haven't had a corrupt firebird database but I was wondering how good
> > gfix is?

Corruption should not happen that often, if at all. If you are
experiencing regular corruption, I would be running some hardware
tests quick smart.

If you do experience corruption, it is frequently inside a back
version of a record, so backing up with gbak -g (to prevent garbage
collection) and then restoring solves it.
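For example (a sketch only; the database path, user, and password are
placeholders for your own):

  gbak -b -g -user SYSDBA -password masterkey mydb.fdb mydb.fbk
  gbak -c -user SYSDBA -password masterkey mydb.fbk mydb_new.fdb

-b backs up, -g switches off garbage collection during the backup, and
-c creates a fresh database from the backup file.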

gfix is able to detect and mend a lot of 'record level' issues,
allowing a backup-restore to occur even if it failed with -g.
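A typical validate-and-mend sequence looks something like this (again,
path and credentials are placeholders):

  gfix -v -full -user SYSDBA -password masterkey mydb.fdb
  gfix -mend -full -ignore -user SYSDBA -password masterkey mydb.fdb

-v -full validates the whole database and reports errors, and -mend
marks the damaged records so they are skipped; after that you attempt
the gbak backup-restore again.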

There are occasions where gfix can't help. If it is a corrupt index,
dropping and re-creating the index will usually solve it. Finally, a
data pump tool can be used if all else fails.
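Rebuilding an index can be done from isql; the index name here is made
up for illustration:

  isql -user SYSDBA -password masterkey mydb.fdb
  SQL> ALTER INDEX IDX_CUSTOMER_NAME INACTIVE;
  SQL> ALTER INDEX IDX_CUSTOMER_NAME ACTIVE;

Setting the index inactive and then active again forces Firebird to
rebuild it from the table data.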

Then there are companies who provide rescue services for more stuffed
databases.

So gfix is not a substitute for a regular backup routine.

> > A friend of mine who has over a hundred customers who all use
> > firebird says that he insists that his customers install ups'es and he
> > says that he never even had one corrupt database.

I have no reason to doubt him. I have seen two instances of corruption
across 150 installs over four years, one solved by gfix, the other
required a data pump.

> > Do firebird databases only become corrupt during power failures?

Unless you are using asynchronous writes, power failures should never
cause corruption. That is the point of durability in ACID. A UPS is
still a good idea though, for availability, and because some file
systems don't cope well with a sudden power loss.
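You can force synchronous writes with gfix (path and credentials are
placeholders):

  gfix -write sync -user SYSDBA -password masterkey mydb.fdb

With forced writes on, committed changes are flushed to disk straight
away instead of sitting in the operating system's cache.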

>
> Other reasons include faulty RAM, defunct disk drives, and running out
> of disk space. One of the most nasty ways would be to hit the file
> size limit - most likely the 2gb/4gb limit on a 32bit file system when
> no secondary db files are defined.

It should be pointed out that NTFS (unless it was formatted by an NT4
box or something) is not subject to this limit. With old file systems
like FAT32 you need to consider it, but any modern file system
supports files large enough that you need not worry.

>
> The only problem I encountered so far was with backups not being able
> to restore because a new not-null-field was added without providing
> values for existing records. But of course that's the fault the
> developer (me).
>

That is certainly a consideration, though not really corruption. gbak
does not validate your data during a backup, so your backup routine
should include a test restore somewhere. gbak also lets you restore
without the validity constraints so you can fix the data afterwards.
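A test restore is just a backup followed by a restore to a scratch
file, for example (names are placeholders):

  gbak -b -user SYSDBA -password masterkey mydb.fdb mydb.fbk
  gbak -c -v -user SYSDBA -password masterkey mydb.fbk test_restore.fdb

If the restore trips over the new NOT NULL column, the -n switch
restores without the validity constraints so you can fix the data and
re-add them.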

Adam