Subject | Re: Possible bug with CHECK constraint
---|---
Author | Roman Rokytskyy
Post date | 2004-10-03T11:56:56Z

> Oh, I do agree with you about this one. But this was not the
> situation that your example illustrated. The check constraint
> worked exactly as defined.

See below.

> > And even more, after adding this constraint any subsequent insert
> > will fail.
>
> Yes - isn't that the idea of the CHECK constraint? It's not an
> integrity constraint. Its purpose is solely to validate incoming
> data *before* it is written anywhere.

Helen, we are talking about an integrity constraint defined for a table,
not for a column/domain. If you say that this is "as designed", then
you're right. But I'm still not comfortable with a table containing
data that do not satisfy the constraint.
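
To make the distinction clearer, a column/domain check depends only on the
incoming value, e.g. (my own illustration):

CREATE DOMAIN d_small_id AS INTEGER CHECK (VALUE < 10);

whereas the check in my example contains a subquery over the whole table, so
it effectively makes a statement about the data that is already stored there.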

> So are you saying you still don't understand why the check
> constraint didn't prevent you inserting the record with id=20?

It is perfectly clear why I was able to insert id=20 - the constraint was
not there yet. What I do not understand is why I was able to commit the
transaction in which the constraint was added. This means only one thing:

- when I add a UNIQUE constraint to a table, the table is scanned for
duplicates and the operation fails when a duplicate is found (I suspect
that happens when the unique index is created, not when the constraint is
added);
- when I add a CHECK constraint to a table, the constraint is simply added
and nothing fails (no index is created, so there is nothing to fail).

From my point of view the second case is a bug. My edition of C.J. Date
does not define what should happen when constraints are added over
existing data, but I think Firebird should be consistent - either all
integrity constraints should be checked against existing data before they
are added, or none of them.
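
For example (names invented here, just to illustrate the UNIQUE case I mean):

CREATE TABLE test_unique (
  id INTEGER NOT NULL
);
COMMIT;
INSERT INTO test_unique VALUES (1);
INSERT INTO test_unique VALUES (1);
COMMIT;
-- this fails - at the latest when the unique index is actually created -
-- because the existing data already contains a duplicate
ALTER TABLE test_unique ADD CONSTRAINT uq_test_unique_id UNIQUE (id);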

> What you observed was not a bug, it was correct behaviour for the
> check constraint as you defined it.
>
> 1. If you had reported the situation where you altered a table that
> already contained "invalid" data, and added that check constraint
> without getting an exception then, yes, that is a pitfall. Not
> everyone approves of making a distinction between a "pitfall" and a
> "bug". I happen to think that, if a feature is implemented in such
> a way that it is allowed to cause inconsistencies in existing data,
> then that's a bug.

Isn't that exactly my case? Then please try the following scripts:

-- Script 1
CREATE DATABASE 'test1.fdb' USER 'sysdba' PASSWORD 'masterkey';
CREATE TABLE test_master (
  id INTEGER NOT NULL PRIMARY KEY
    CHECK(COALESCE((SELECT max(id) FROM test_master), 0) < 10)
);
COMMIT;
INSERT INTO test_master VALUES (1);
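-- I would expect the next insert to fail, but it goes through: the subquery
-- in the CHECK apparently still sees only max(id) = 1 when the new row is
-- validated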
INSERT INTO test_master VALUES (20);
COMMIT;
-- Script 2
CREATE DATABASE 'test2.fdb' USER 'sysdba' PASSWORD 'masterkey';
CREATE TABLE test_master (
  id INTEGER NOT NULL PRIMARY KEY
);
COMMIT;
INSERT INTO test_master VALUES (1);
INSERT INTO test_master VALUES (20);
COMMIT;
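-- the table already contains id = 20, which violates the CHECK added below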
ALTER TABLE test_master ADD CONSTRAINT test_max_value
CHECK((SELECT max(id) FROM test_master) < 10);
COMMIT;

And try to explain why the last COMMIT succeeds. :)
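
And, as I wrote earlier, once that constraint is in place I would expect any
further insert to fail, because the subquery already returns 20:

INSERT INTO test_master VALUES (2);
-- expected to fail: max(id) is already 20, so the CHECK evaluates to false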

> 2. It is not possible to add a NOT NULL constraint to an existing
> column, just because there is no way to guess what should replace
> nulls in the existing data. That is correct, IMO, since it protects
> consistency; yet we frequently see people in this list complaining
> that it is a bug.

Shouldn't the corresponding ALTER TABLE ... ADD CONSTRAINT simply fail
when NULLs are detected?

> What can not happen is that CHECK constraints get promoted to being
> integrity constraints. The CHECK constraint is *designed* to
> validate external data, not enforce integrity of data already
> stored. People should not make the mistake of treating a CHECK
> constraint as though it were an integrity constraint.

OK, in that case it is not a bug but my mistake, and Script 2 is fine - but
Script 1 still remains. :)

Roman