| Subject | RE: [firebird-support] insert bug in SQL execution??? |
|---|---|
| Author | Helen Borrie |
| Post date | 2004-08-16T00:18:27Z |
At 12:53 PM 15/08/2004 -0400, Sean Leyne wrote:

> > the statement
> > INSERT INTO T1 SELECT * FROM T1 (duplicating T1)
> > causes (in Firebird 1.5.1, maybe in IB too) closed loop.
> >
> > PostgreSQL works correctly!
>
>Yes, this an old/long standing bug with FB and IB (all versions)....

...while others would say it was a crippling bug if database rules
prevented a transaction from seeing its own uncommitted work.
What does "PostgreSQL works correctly" really indicate here? Relational
rules don't encourage exact duplicate records: what can a query engine do
with them? From the point of view of relational integrity, PG breaks the
rule by permitting the duplication, and Fb/IB honours it by letting the
statement consume its own inserted rows until it inevitably fails. In PG
you end up with duplicate rows; in Fb/IB you end up with no duplicate rows
and an exception.
On a keyed table, the constraint would prevent you from duplicating any row
using this statement.
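A minimal sketch of that keyed-table case, using Python's sqlite3 for
portability (the table and column names T1, ID and VAL are illustrative,
not from the thread; SQLite simply rejects the statement outright rather
than recursing, but the key constraint does the same protective job):

```python
import sqlite3

# Hypothetical keyed table: ID is the primary key, so duplicating
# whole rows (key included) must violate the constraint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T1 (ID INTEGER PRIMARY KEY, VAL TEXT)")
conn.execute("INSERT INTO T1 VALUES (1, 'a')")
conn.commit()

try:
    conn.execute("INSERT INTO T1 SELECT * FROM T1")
    print("duplicated")
except sqlite3.IntegrityError as exc:
    # The duplicate key is rejected and the table is left unchanged.
    print("rejected:", exc)
```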
Where there's a need to duplicate data, there are techniques to enable you
to do it without jeopardising the integrity of the table. For example,
it's not a problem where you have an auto-incrementing primary key on the
table and use an appropriate WHERE clause to limit the rows selected.
INSERT INTO T1 (<list of columns excluding the primary key>)
SELECT <list of columns excluding the primary key> FROM T1
WHERE ThePrimaryKey <= <maximum key value you want duplicates for>
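As a concrete sketch of that pattern (again in sqlite3, with a
hypothetical auto-incrementing key ID and one data column VAL standing in
for the real schema):

```python
import sqlite3

# Safe INSERT...SELECT duplication: exclude the auto-incrementing key
# and bound the SELECT with the key's current maximum.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T1 (ID INTEGER PRIMARY KEY AUTOINCREMENT, VAL TEXT)")
conn.executemany("INSERT INTO T1 (VAL) VALUES (?)", [("a",), ("b",), ("c",)])

# Capture the current maximum key first, so the SELECT cannot pick up
# the rows the INSERT itself adds; new rows get freshly generated keys.
(max_id,) = conn.execute("SELECT MAX(ID) FROM T1").fetchone()
conn.execute("INSERT INTO T1 (VAL) SELECT VAL FROM T1 WHERE ID <= ?", (max_id,))
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM T1").fetchone()[0])  # 6
```

Each of the three original rows is copied exactly once, under a new key,
and the statement terminates because the WHERE clause caps the rows read.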
There are other techniques for doing INSERT...SELECT duplication
safely. Depending on the requirements, you might need to apply protective
conditions to the transaction as well.
Erti, if you're seriously looking for a solution to an actual requirement,
ask again and provide a contextual description of what you want to do.
/heLen