Subject: Re: [firebird-support] Possible bug with CHECK constraint
Author: Helen Borrie
At 08:09 AM 3/10/2004 +0000, you wrote:

>Helen,
>
> > >I would consider this a bug.
> > >What is the sense of having CHECK-constraints when I can create
> > >data, which fails that check?
> >
> > With Roman's example, it doesn't fail the check that's defined. The
> > incoming value isn't written anywhere yet, so how could it possibly
> > participate in the Max() aggregation? At the point the check is
> > done, the check constraint is entirely met.
>
>Strictly speaking, after adding a constraint the database is not
>consistent (there are rows that do not satisfy the constraint). And
>since the database is not consistent, the commit should fail (a
>transaction is an atomic piece of work that brings the database from
>one consistent state to another; that's the "C" in ACID).

Oh, I do agree with you about this one. But this was not the situation
that your example illustrated. The check constraint worked exactly as defined.

>And even more, after adding this constraint any subsequent insert will fail.

Yes - isn't that the idea of the CHECK constraint? It's not an integrity
constraint. Its purpose is solely to validate incoming data *before* it is
written anywhere.
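
To illustrate, here is my own reconstruction of the general shape of
such a constraint - not Roman's exact DDL, and the names are invented:

create table aTable (id integer);

alter table aTable
add constraint OnlyWhenEmpty
CHECK ((select max(id) from aTable) is null);

insert into aTable (id) values (20);
/* passes: at check time the row is not yet stored,
   so max(id) over the stored rows is null */
commit;

insert into aTable (id) values (21);
/* fails: the first row is now stored, so max(id) = 20 */

The incoming row cannot participate in its own aggregation; the check
sees only rows that are already written.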

>Backup/restore will fail as well.

Backup (unfortunately and wrongly) doesn't fail when there is inconsistent
data. But, even if gbak *did* perform consistency checks per integrity
constraints, it wouldn't catch data that are inconsistent with CHECK
constraints, since check constraints are, by nature, performed only on
external data.

Restore, correctly, does fail when the data that it attempts to restore
fails a check constraint. So we have a known problem.


>Actually this construct was "invented" to emulate exception on commit
>(I wanted to test JayBird's behavior in this case), but I was
>surprised when no exception happened.

So are you saying you still don't understand why the check constraint
didn't prevent you inserting the record with id=20?


>So, from my point of view:
>
>a) this is a bug, probably similar to the bug with adding NOT NULL
>constraint to the columns with NULL values;

What you observed was not a bug, it was correct behaviour for the check
constraint as you defined it.

1. If you had reported the situation where you altered a table that already
contained "invalid" data and added that check constraint without getting
an exception, then yes, that is a pitfall. Not everyone approves of making
a distinction between a "pitfall" and a "bug". I happen to think that, if
a feature is implemented in such a way that it is allowed to cause
inconsistencies in existing data, then that's a bug.
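
To make the pitfall concrete (a sketch with invented names):

create table aTable (id integer);
insert into aTable (id) values (-5);
commit;

alter table aTable
add constraint PositiveOnly
CHECK (id > 0);
/* succeeds without complaint, although the stored row
   with id = -5 already violates the new constraint */

From that point on, the database contains data that a restore of its
own backup would reject.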

2. It is not possible to add a NOT NULL constraint to an existing column,
just because there is no way to guess what should replace nulls in the
existing data. That is correct, IMO, since it protects consistency; yet we
frequently see people in this list complaining that it is a bug.

3. Ideally, alter table add column blah blah not null should be
accompanied by a mandatory default clause, and the engine should
automatically populate the column before the operation completes. It
doesn't; and if (when) this is implemented, it will add another case where
an "Object in Use" exception would need to occur if the request were not
submitted with exclusive access. What we currently have to do - hop in
immediately after committing the metadata and write a default throughout
the table to the new column - is fraught with risk, both human and systemic.
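
The current workaround looks roughly like this (names invented; exact
behaviour varies between engine versions):

alter table aTable
add newCol integer default 0 not null;
commit;

update aTable set newCol = 0;
/* existing rows still hold NULL in newCol until this update runs;
   that window, and anything that reads the table during it, is the
   risk referred to above */
commit;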

>b) as it was discussed many times already, it must be fixed.

What cannot happen is for CHECK constraints to get promoted to being
integrity constraints. The CHECK constraint is *designed* to validate
external data, not to enforce the integrity of data already stored. People
should not make the mistake of treating a CHECK constraint as though it
were an integrity constraint.

Consider a nullable column aColumn. You can do this:

alter table aTable
add constraint NoNulls
CHECK (aColumn is not null);

This should *not* be treated as a "poor man's NOT NULL constraint", since
its purpose is to screen input, not to alter the nullness state of existing
data.
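
That is, with NoNulls in place only new writes are screened; rows stored
before the constraint was added keep their nulls. A quick sketch:

insert into aTable (aColumn) values (null);
/* rejected by the NoNulls check */

select count(*) from aTable where aColumn is null;
/* may still return a non-zero count, from rows that
   were stored before the constraint existed */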

Helen