| Subject | Re: [Firebird-general] Firebird corrupts databases more often? |
|---|---|
| Author | Helen Borrie |
| Post date | 2004-01-16T14:59:05Z |
At 09:04 PM 16/01/2004 +0700, you wrote:
"Likely. I can tell you that Ann Harrison once told me she made a decent
amount of money as a consultant fixing broken Interbase/Firebird
database files. It would be hard to make a living in the same game for
Postgres. Now I don't think that Firebird is any buggier than Postgres.
But it comes in an embedded-library form; I'll bet lunch that most of
those data corruption problems were actually induced by crashes of
surrounding applications."
I hope Ann follows that up with Tom Lane. Whatever causes Firebird
corruptions, databases are pretty safe from "surrounding applications"
unless of course those applications pass bad data and/or the database
design is intrinsically flawed. Then you're not looking at structural
corruption, only GIGO.
other than and overflowing the 2Gb file limit on Win98, has ever crossed my
threshold as a recognised source of corruption...
- there is no threaded server process to crash. But from the point of view
of corruption, no difference. User-initiated crashes don't as a rule
corrupt databases, beyond leaving some bits and pieces orphaned that will
need to be cleared out. It's very hard to corrupt a Firebird database but,
if you're determined to, you can.
fwiw, known sources of corruption (meaning busted databases that need
tweezers and Lysol to repair) are:
- users logging into the database and doing operations during a gbak -r
(gbak -r = freefall skydiving)
- systems without no UPS, running with forced writes off
- pre-1.5 systems running on Windows with forced writes off generally (esp.
those that are 24/7)
- inconsistent path strings on Windows (now made impossible in Firebird SS
but possible in CS and a strong incentive to use aliases in all situations)
- running filesystem backups, filecopying, disk compression and other
little toys that lock disk sectors while users have pending work
- keeping the database running on a disk with an increasing tally of bad
sectors
- overflowing the file limit anywhere there is one; or running out of disk
space.
- deleting secondary files
- changing data object attributes by performing DML operations on the
system tables
- working with databases that were transported as file copies or zip files
made when the database was active
- deleting sort files (not common, and shouldn't happen, but I have seen it)
Notice, these are all things that are initiated by poor system control and
DBA ignorance. Databases are otherwise quite self-healing.
fwiw
Helen
>First of all, a forward apology. Do not mean to turn any rock that
>shouldn't been turned. But this post from Tom Lane is a bit
>discomforting for me:
>
> http://archives.postgresql.org/pgsql-general/2004-01/msg00681.php

This is an interesting misconstruction by Tom Lane:
"Likely. I can tell you that Ann Harrison once told me she made a decent
amount of money as a consultant fixing broken Interbase/Firebird
database files. It would be hard to make a living in the same game for
Postgres. Now I don't think that Firebird is any buggier than Postgres.
But it comes in an embedded-library form; I'll bet lunch that most of
those data corruption problems were actually induced by crashes of
surrounding applications."
I hope Ann follows that up with Tom Lane. Whatever causes Firebird
corruptions, databases are pretty safe from "surrounding applications"
unless of course those applications pass bad data and/or the database
design is intrinsically flawed. Then you're not looking at structural
corruption, only GIGO.
>Ann, what is the most common causes of IB/FB database corruption from
>your experience:
>
>1. running on Win9x;
>2. threadedness;
>3. embedded (application fault);
>4. older versions of IB;

Hopefully Ann will give you some numbers on the above, but none of them,
other than overflowing the 2Gb file limit on Win98, has ever crossed my
threshold as a recognised source of corruption...
>Is CS on Linux (or even Windows) the way to go for maximum reliability?

CS does have the advantage of isolating *any* problems in a single process
- there is no threaded server process to crash. But from the point of view
of corruption, no difference. User-initiated crashes don't as a rule
corrupt databases, beyond leaving some bits and pieces orphaned that will
need to be cleared out. It's very hard to corrupt a Firebird database but,
if you're determined to, you can.
fwiw, known sources of corruption (meaning busted databases that need
tweezers and Lysol to repair) are:
- users logging into the database and doing operations during a gbak -r
(gbak -r = freefall skydiving)
- systems with no UPS, running with forced writes off
- pre-1.5 systems running on Windows with forced writes off generally (esp.
those that are 24/7)
- inconsistent path strings on Windows (now made impossible in Firebird SS
but still possible in CS, which is a strong incentive to use aliases in all
situations)
- running filesystem backups, filecopying, disk compression and other
little toys that lock disk sectors while users have pending work
- keeping the database running on a disk with an increasing tally of bad
sectors
- overflowing the file limit anywhere there is one; or running out of disk
space.
- deleting secondary files
- changing data object attributes by performing DML operations on the
system tables
- working with databases that were transported as file copies or zip files
made when the database was active
- deleting sort files (not common, and shouldn't happen, but I have seen it)
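Several of the items above are avoidable with a few standard administration habits. As an illustrative sketch only (the file paths and the alias name are hypothetical, and the exact switches should be checked against the Firebird version in use), the forced-writes, gbak -r, and path-string items might be handled like this:

```shell
# Hypothetical paths and alias name, shown for illustration only.

# Keep forced writes ON, so committed pages reach disk immediately
# (running with forced writes off and no UPS is on the list above):
gfix -write sync /data/employee.fdb

# Avoid "gbak -r" (restore-in-place over a live file). Restore to a NEW
# file, validate it, and only then swap it into place:
gbak -b /data/employee.fdb /backups/employee.fbk
gbak -c /backups/employee.fbk /data/employee.new.fdb
gfix -v -full /data/employee.new.fdb
mv /data/employee.fdb /data/employee.old.fdb
mv /data/employee.new.fdb /data/employee.fdb

# Use a database alias so clients never embed literal (and potentially
# inconsistent) path strings. Entry in aliases.conf:
#     employee = /data/employee.fdb
# Clients then connect to "server:employee" instead of a file path.
```

The point of the restore sequence is that the original database file is never the target of the restore, so a failed or interrupted restore leaves the working database untouched.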
Notice, these are all things that are initiated by poor system control and
DBA ignorance. Databases are otherwise quite self-healing.
fwiw
Helen