Subject Re: Possible bug with CHECK constraint
Author Roman Rokytskyy
> Oh, I do agree with you about this one. But this was not the
> situation that your example illustrated. The check constraint
> worked exactly as defined.

See below.

> >And even more, after adding this constraint any subsequent insert
> > will fail.
> Yes - isn't that the idea of the CHECK constraint? It's not an
> integrity constraint. Its purpose is solely to validate incoming
> data *before* it is written anywhere.

Helen, we are talking about an integrity constraint defined for a table,
not for a column/domain. If you say this is "as designed", then
you're right. But I'm still not comfortable with a table containing
data that do not satisfy the constraint.
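Helen's point above - that a CHECK fires against each incoming row, before anything is written - is easy to reproduce. A minimal sketch, using Python's sqlite3 as a stand-in for Firebird (the simple column predicate here is my own illustration, not the subquery constraint from the scripts below; Firebird semantics may differ in detail):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A column-level CHECK is evaluated against each incoming row.
con.execute("CREATE TABLE test_master (id INTEGER CHECK (id < 10))")
con.execute("INSERT INTO test_master VALUES (1)")       # satisfies id < 10
try:
    con.execute("INSERT INTO test_master VALUES (20)")  # violates id < 10
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
# The offending row never reaches the table; only id=1 is stored.
```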

> So are you saying you still don't understand why the check
> constraint didn't prevent you inserting the record with id=20?

It is perfectly clear why I was able to insert id=20 - the constraint
was not there yet. What I do not understand is why I was able to commit
the transaction in which the constraint was added. This can only mean
the following:

- when I add a UNIQUE constraint to a table, the table is scanned for
duplicates and the operation fails when a duplicate is found (I suspect
this happens when the unique index is created, not when the constraint
is added);

- when I add a CHECK constraint to a table, the constraint is simply
added and nothing fails (no index is created, so there is nothing to fail).
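The first case - existing rows being scanned when the unique index is built - can also be demonstrated with the sqlite3 stand-in (index name `ux_test_master_id` is mine, for illustration only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test_master (id INTEGER)")
con.execute("INSERT INTO test_master VALUES (1)")
con.execute("INSERT INTO test_master VALUES (1)")  # duplicate already stored

# Building the unique index forces a scan of the existing rows,
# so the pre-existing duplicate makes the whole operation fail.
try:
    con.execute("CREATE UNIQUE INDEX ux_test_master_id ON test_master (id)")
    index_created = True
except sqlite3.IntegrityError:
    index_created = False
```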

From my point of view the second case is a bug. My edition of C. J. Date
does not define what should happen when constraints are added over
existing data, but I think Firebird should be consistent - either all
integrity constraints should be checked before they are added, or none
of them.

> What you observed was not a bug, it was correct behaviour for the
> check constraint as you defined it.
> 1. If you had reported the situation where you altered a table that
> already contained "invalid" data, and added that check constraint
> without getting an exception then, yes, that is a pitfall. Not
> everyone approves of making a distinction between a "pitfall" and a
> "bug". I happen to think that, if a feature is implemented in such
> a way that it is allowed to cause inconsistencies in existing data,
> then that's a bug.

Isn't that exactly my case? Then please try the following scripts:

-- Script 1
CREATE DATABASE 'test1.fdb' USER 'sysdba' PASSWORD 'masterkey';

CREATE TABLE test_master (
  id INTEGER,
  CHECK(COALESCE((SELECT max(id) FROM test_master), 0) < 10)
);
COMMIT;

INSERT INTO test_master VALUES (1);
INSERT INTO test_master VALUES (20);
COMMIT;


-- Script 2
CREATE DATABASE 'test2.fdb' USER 'sysdba' PASSWORD 'masterkey';

CREATE TABLE test_master (
  id INTEGER
);
COMMIT;

INSERT INTO test_master VALUES (1);
INSERT INTO test_master VALUES (20);
COMMIT;

ALTER TABLE test_master ADD CONSTRAINT test_max_value
CHECK((SELECT max(id) FROM test_master) < 10);
COMMIT;


And try to explain why the last COMMIT succeeds. :)

> 2. It is not possible to add a NOT NULL constraint to an existing
> column, just because there is no way to guess what should replace
> nulls in the existing data. That is correct, IMO, since it protects
> consistency; yet we frequently see people in this list complaining
> that it is a bug.

Shouldn't the corresponding ALTER TABLE ... ADD CONSTRAINT simply fail
when NULLs are detected?
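The behaviour I'm suggesting is just a scan of the existing rows before the constraint is accepted - roughly what a conforming engine would have to do internally. A sketch of that check, again using sqlite3 (the pre-scan logic here is my own illustration of the idea, not anything Firebird actually does):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test_master (id INTEGER)")
con.execute("INSERT INTO test_master VALUES (NULL)")

# The suggested behaviour: scan for NULLs first and refuse the
# NOT NULL constraint if any existing row would violate it.
nulls = con.execute(
    "SELECT count(*) FROM test_master WHERE id IS NULL"
).fetchone()[0]
constraint_allowed = (nulls == 0)
```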

> What can not happen is that CHECK constraints get promoted to being
> integrity constraints. The CHECK constraint is *designed* to
> validate external data, not enforce integrity of data already
> stored. People should not make the mistake of treating a CHECK
> constraint as though it were an integrity constraint.

Ok, in that case it is not a bug, but my mistake. Then Script 2 is
fine, but Script 1 still stands :)