Subject: RE: [firebird-support] RE: Duplicate primary keys v1.5.1
Author: Rick DeBay
Post date: 2004-10-25T16:56:07Z
> my guess is that you created the table in the same transaction as you ran the procedure
Nope. I created the table using DataBase Workbench, and then I used
JBoss to bulk load the data.
I never modify the schema through the application server. I don't even
let my EJBs do that under the covers.
-----Original Message-----
From: Helen Borrie [mailto:helebor@...]
Sent: Friday, October 22, 2004 9:58 PM
To: firebird-support@yahoogroups.com
Subject: Re: [firebird-support] RE: Duplicate primary keys v1.5.1
At 02:23 PM 22/10/2004 -0400, you wrote:
>I used a stored procedure to import a lot of data, and got quite a
>surprise when I viewed it. Every imported record was in the table,
>including duplicates! Looking at the statistics for the primary key
>shows that it wasn't used. I wasn't aware that a PK could be turned
>off, or how to turn it back on now that it's off. Gfix -v -n didn't
>return any messages.

A PK can't be "turned off" except by dropping it. You can
deactivate/activate indexes using ALTER INDEX xxx INACTIVE | ACTIVE.
However, you can't deactivate the indexes that the system creates
to enforce constraints.
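For an ordinary user-created index it works as you'd expect. A quick
sketch (the table, column and index names here are made up, just for
illustration):

create index idx_customer_name on customer (cust_name);
commit;
alter index idx_customer_name inactive;
commit;
alter index idx_customer_name active;
commit;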
Try this:
create table testpk
(id integer not null,
data varchar(10),
constraint pk_id primary key (id)
using asc index pk_testpk);
commit;
alter index pk_testpk inactive;
ISC ERROR CODE:335544351
ISC ERROR MESSAGE:
unsuccessful metadata update
MODIFY RDB$INDICESS failed
action cancelled by trigger (3) to preserve data integrity
Cannot deactivate primary index
(must send that isc message through to Dmitry. It seems to have
acquired an extra "S"...)
As for your "problem", my guess is that you created the table in the
same transaction as you ran the procedure, without committing the DDL.
Because the transaction could see the entries in RDB$RELATIONS,
RDB$FIELDS and RDB$RELATION_FIELDS, it allowed the inserts into the
uncommitted tables; but, at that point, the system couldn't apply the
constraint and create the index, because the metadata for the table
were uncommitted. It would have excepted on the duplicates when the
transaction was committed, so apparently you never did commit the
transaction until after you deleted the rows. In that case, when the
transaction was finally committed, the table was empty, the constraint
was finally created and there was nothing amiss. There was probably no
need to drop and recreate the constraint.
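Put another way, if it were all driven from a script, the safe sequence
would look something like this (object names here are just
placeholders, not your actual objects):

create table import_target
(id integer not null,
data varchar(10),
constraint pk_import_target primary key (id));
commit;
/* the DDL is now committed, so the PK constraint and its index exist */
execute procedure load_import_data;
commit;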
btw, it's a little known fact that gstat works by reading directly from
the database file, not by connecting to the database. Hence, its
reports are not in any transaction context - it can't distinguish
between committed and uncommitted data, i.e. it's a dinkum "dirty
read". The stats you showed are quite consistent with what I'm guessing
you did.
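(You can see that for yourself by pointing gstat straight at the file,
e.g.

gstat -i /path/to/mydb.fdb

- the path is just an example. The index statistics it prints take no
notice of whether the rows involved were ever committed.)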
Just in case you've missed it in the rush - you're not meant to create
metadata and add data to the structures in the same transaction. DDL
and DML should *always* be in separate transactions. The next ODS is
going to enforce that, I believe. Widespread abuse of the EXECUTE
STATEMENT syntax to pass DDL through stored procedures has augmented
the risks of people doing things they didn't oughta. (The rule about
not using DDL in procedure modules is actually there by design.)
./hb