Subject: Re: Database corruption...
Author: peter_jacobi.rm
Hi Jonathan,

--- In firebird-support@yahoogroups.com, Jonathan Neve wrote:
> What I'm interested in, specifically, is understanding how, when an
> update is performed, it is written to the database, and how exactly the
> multigenerational system works. If I perform a select, how does
> InterBase know which version of the record to show me, etc?
>
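Not an answer on the Firebird internals, but the general idea behind
"which version of the record do I see" in a multi-generational system is
roughly this: each record version remembers the transaction that wrote it
and points back to the older version, and a snapshot transaction walks
that chain until it reaches a version it is allowed to see. The names and
structures below are invented for illustration, not Firebird's actual
on-disk format:

    /* Generic MVCC visibility sketch -- not Firebird's real structures. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef unsigned long tx_id_t;

    typedef struct RecordVersion {
        tx_id_t               writer_tx; /* transaction that wrote this version  */
        struct RecordVersion *older;     /* back-pointer to the previous version */
        /* ... record data ... */
    } RecordVersion;

    /* Toy stand-in: a real engine consults its transaction inventory to
     * learn whether writer_tx had committed when the snapshot started. */
    static bool committed_before_snapshot(tx_id_t tx, tx_id_t snapshot_tx)
    {
        return tx < snapshot_tx;
    }

    /* Newest version the snapshot transaction is allowed to see, if any. */
    static const RecordVersion *visible_version(const RecordVersion *newest,
                                                tx_id_t snapshot_tx)
    {
        for (const RecordVersion *v = newest; v != NULL; v = v->older)
            if (committed_before_snapshot(v->writer_tx, snapshot_tx))
                return v;
        return NULL;
    }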
> The reason I would like to understand this more fully is that I'm
> wondering how it would be possible to make Firebird 100%
> corruption-proof.

I'm as interested as you are in the FB-specific answers to this,
but for the whole picture, a lot of players are involved.

Different filesystems handle the related issues differently,
depending on whether they are journalling (metadata-only or
full-data), log-structured, use sync/async/soft updates, etc.

And even the best intentions of the filesystem can be spoiled by
overly clever write caches in the disk drive, which may need to
be turned off to make the integrity promises of the filesystem hold.
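
To make that concrete: even a plain write() followed by fsync() only
guarantees that the operating system has handed the data to the drive;
a volatile on-drive write cache can still acknowledge the write before
it is really on the platters. A minimal POSIX sketch, nothing
Firebird-specific:

    /* write() puts the data in the OS cache, fsync() pushes it to the
     * drive -- but an enabled drive-level write cache may still hold it.
     * On Linux IDE disks that cache can be disabled with, for example,
     * `hdparm -W 0 /dev/hda`. */
    #include <fcntl.h>
    #include <unistd.h>

    int write_durably(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        return close(fd); /* durable only as far as the drive allows */
    }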

Regards,
Peter Jacobi

> In other words, how can we make every single disk
> operation atomic? Naturally, there is already a degree of atomicity due
> to the transaction system, but what would be good, would be to make
> every single write atomic; that is, if it gets cut off in the middle,
> have some flag somewhere, or something that can tell us that the write
> wasn't completed, and that it can therefore be treated as though nothing
> had happened at all.
>
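One generic way to get that "write wasn't completed" flag is to checksum
every page and verify the checksum when the page is read back: a page
that fails the check can be treated as if it had never been written.
Purely an illustration with invented names and layout, not Firebird's
actual page format:

    /* Detecting a torn (partially written) page with a checksum. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096

    typedef struct {
        uint32_t checksum;                  /* computed over `body` only  */
        uint8_t  body[PAGE_SIZE - sizeof(uint32_t)];
    } Page;

    static uint32_t page_checksum(const Page *p)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < sizeof p->body; i++)
            sum = sum * 31 + p->body[i];    /* toy hash, not a real CRC   */
        return sum;
    }

    static void page_seal(Page *p)            /* just before writing it out */
    {
        p->checksum = page_checksum(p);
    }

    static bool page_is_intact(const Page *p) /* just after reading it in  */
    {
        return p->checksum == page_checksum(p);
    }

If the power fails halfway through the write, the stored checksum almost
certainly no longer matches the body, so the page can be discarded as
"nothing happened".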
> For instance, if I perform several inserts into a table of my database,
> within the scope of my transaction, these inserts are atomically grouped
> together, and I either commit them all, or roll them all back.
> Therefore, if the client disconnects abruptly after the first insert,
> everything will get neatly rolled back. However, if, during one of the
> inserts, there's a power failure, the record will only partially be
> written to disk, resulting in an incorrect structure, worse than if
> nothing had been written at all. We need to have a way of telling that a
> certain operation was not completed successfully, and can therefore be
> discarded.
>
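The other half of the trick is write ordering: get the new data
completely onto disk before anything on disk points to it, so that a
crash in the middle leaves at worst an unreachable page rather than a
broken structure. Again just a sketch with invented names and page
layout, not the engine's real allocator:

    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>

    #define PAGE_SIZE 4096
    typedef uint32_t page_no_t;

    /* Write one page at its offset and force it to disk before returning. */
    static bool write_page_durably(int fd, page_no_t page, const void *data)
    {
        off_t off = (off_t)page * PAGE_SIZE;
        if (pwrite(fd, data, PAGE_SIZE, off) != PAGE_SIZE)
            return false;
        return fsync(fd) == 0;
    }

    /* Careful ordering: data page first, pointer page second. */
    static bool insert_record(int fd,
                              page_no_t data_page,    const void *new_data_page,
                              page_no_t pointer_page, const void *new_pointer_page)
    {
        /* Crash after this write: the new page exists but nothing points
         * to it yet -- harmless, it reads as "nothing happened at all". */
        if (!write_page_durably(fd, data_page, new_data_page))
            return false;

        /* Crash before this write: the old pointer page still describes a
         * consistent database.  Only when this write completes does the
         * insert become visible. */
        return write_page_durably(fd, pointer_page, new_pointer_page);
    }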
> Of course, I also know that there is an option called ForcedWrites,
> which, if it is used (which of course, it is on all my databases), makes
> the database perform synchronous writes instead of asynchronous writes.
> This helps a lot, of course, because it greatly reduces the likelihood of
> there being unwritten data pending in the event of an abrupt shutdown.
> However, it's not good enough in many cases. It's still theoretically
> possible to have corruption due to a power failure. Also, a badly coded
> SP or trigger can make the database loop indefinitely. In such an
> event (I've had this happen to me), the only solution is to kill the
> server process (killing the client isn't enough). If the SP was
> performing database writes, and not mere reads, it's very likely to
> corrupt the database, for obvious reasons (this is what happened in my
> case). And of course, there are other ways of corrupting a DB as well.
>
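At the operating-system level, that forced/synchronous versus
asynchronous distinction looks roughly like this; O_SYNC is the POSIX
flag, and the file names are only examples:

    /* With O_SYNC every write() returns only after the OS has pushed the
     * data to the device (roughly what Forced Writes asks for).  Without
     * it, write() returns as soon as the data sits in the OS cache, and
     * pending data is lost if the machine goes down before a flush. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "some page image\n";

        /* Forced writes: each write waits for the device. */
        int fd_sync = open("forced.dat", O_WRONLY | O_CREAT | O_SYNC, 0644);
        write(fd_sync, msg, sizeof msg - 1);
        close(fd_sync);

        /* Asynchronous writes: fast, but pending data can be lost. */
        int fd_async = open("async.dat", O_WRONLY | O_CREAT, 0644);
        write(fd_async, msg, sizeof msg - 1);
        close(fd_async);
        return 0;
    }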
> Do you think it would be possible to design the system in such a way
> that every write is entirely atomic? And if so, would this entirely
> solve the problem, or am I missing something? It seems to me that in
> such a case, we could even afford to turn ForcedWrites off, without it
> affecting the stability of the database at all. Unfortunately however,
> my current understanding of the internal structure of the database is
> too limited to enable me to seriously speculate in this area...
>
> Any ideas?
>
> Jonathan Neve.