Subject: Re: [firebird-support] Re: Unique Constraints being violated in Firebird
Author: Ann W. Harrison
Adam wrote:
>>
>> ...the failure would have prevented
>> the transaction that entered the bad value from being committed. An
>> uncommitted duplicate would not cause an error.
>
> Hi Ann,
>
> Is it then safe to assume that the OS will flush the cache in the
> order it is written?

No, not at all. And you may be right - there might be an
unfortunate order of events that would allow a duplicate to
appear.

The record will be stored on a data page in cache before the
uniqueness check is done, because the new index entry needs
the db_key, which is computed from the data page and the
record's offset within it - both of which are determined when
the record goes on the page. If the index check fails, the
work of the statement will be undone before the command
returns to the client.
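
For concreteness, the case in question looks something like
this (an illustrative table, not one from the original thread;
the db_key is visible as the RDB$DB_KEY pseudo-column):

    create table t (id integer not null constraint t_uniq unique);

    insert into t values (1);  -- record goes on a data page first,
                               -- then the index entry is added
    insert into t values (1);  -- uniqueness check fails; the
                               -- statement's work is undone
    select rdb$db_key, id from t;  -- one row, with its db_key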

I guess it's vaguely possible that the data page with the bad
record could be written to disk for some reason between the
time the record was written to it and the time that the index
check failed - intermediate states of pages do get written to
disk, for example when the cache has to evict a dirty page to
make room. Then, by bad luck, the page might not be written
again after the record was removed. And, again by bad luck,
the transaction could succeed after that statement failed, and
the TIP (transaction inventory page, which records which
transactions have committed) could be written to disk without
the other pages affected by the transaction.
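
Spelled out, that scenario needs all of the following, in this
order (a reconstruction of the window described above, not an
observed sequence):

    1. The insert puts the record on a data page in cache.
    2. Windows flushes that dirty page to disk.
    3. The uniqueness check fails and the record is removed,
       but only in the cached copy of the page.
    4. The cleaned-up page is never flushed back to disk.
    5. The transaction commits and its TIP page reaches disk.
    6. The server stops before anything rewrites the data page.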

A problem with that model is that the data page would have
to be written to disk in a very narrow window, and the TIP
would also have to be written. The version of Firebird
in question lets Windows decide when to flush the cache,
and Windows generally does that at process shutdown, which
makes the two necessary writes even less probable.
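
Assuming that behavior corresponds to running with forced
writes off (asynchronous writes), forced writes can be
switched on with gfix, which makes the server write pages
synchronously and in careful order (the database path and
credentials below are illustrative):

    gfix -user SYSDBA -password masterkey -write sync C:\data\mydb.fdb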

If someone had that much bad luck, I'd expect to find lots of
other anomalies - missing records, deleted records coming back,
general chaos, and probably corruption such as wrong page types.


Regards,


Ann