Subject Re: Re[2]: [ib-support] Unsuccessful metadata update: Index is in use
Author Claudio Valderrama C.
"Dieter Tremel" <tremel@...> wrote in message
news:1166221866.20020325091517@......
>
> thank you for your solution, it helped me finish the script.
> Unfortunately, I don't fully understand why the index remains locked
> even if the transaction is committed. Is this a bug, or is it necessary?

It's necessary for now.
create table master (m int not null primary key);
=> master got an underlying index named rdb$primaryNN.
create table detail
(a int references master(m) on delete cascade,
b int references master(m) on update cascade);
=> detail got two underlying indices named rdb$foreignNN.
=> master got two special triggers, one that implements the delete effect and
one that implements the update effect; both apply to "detail", since you
asked for cascade.
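The underlying index names can be inspected in the RDB$INDICES system table (a sketch; the NN suffixes are assigned by the engine and will differ per database):

```sql
-- RDB$FOREIGN_KEY is non-null for the indices that back FK constraints.
select rdb$index_name, rdb$relation_name, rdb$foreign_key
from rdb$indices
where rdb$relation_name in ('MASTER', 'DETAIL');
```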
You can see internally in the system tables that those triggers have a
dependency on the FK field of detail. This is obvious because one trigger
does
delete from detail where a = old.m;
and the other does
update detail set b = new.m where b = old.m;
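The triggers and their dependencies can be checked with queries like these (a sketch; the generated trigger names vary per database):

```sql
-- The system-generated cascade triggers attached to MASTER.
select rdb$trigger_name, rdb$relation_name, rdb$trigger_type
from rdb$triggers
where rdb$relation_name = 'MASTER';

-- Their dependencies: the cascade triggers show up here as dependents
-- on DETAIL and its FK columns A and B.
select rdb$dependent_name, rdb$depended_on_name, rdb$field_name
from rdb$dependencies
where rdb$depended_on_name = 'DETAIL';
```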

But those fields are indexed, as I showed above. When you drop the first
constraint, the engine has to rebuild the metadata for "detail". However,
since the CASCADE implementation caused a change in master (the two triggers
behind the scenes), the engine also has to modify master to drop the trigger
that implements the delete-cascade. After the relation is modified, it's
reloaded. Reloading means scanning all the dependencies and triggers it has.
Hence, the reloading process picks up the second trigger (the one that
implements the update-cascade, whose associated FK we haven't dropped yet)
and compiles it. Since that trigger uses detail.b, the underlying index on
detail.b is taken as a needed resource.

When you try to drop the second constraint, the engine again has to modify
both relations, because there's a cascade operation defined. When it goes to
modify detail to drop the index on b, that index can't be dropped because the
trigger in master holds it (the trigger has not been unloaded).
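To make the failure concrete, here is the sequence restated with explicitly named constraints (the names fk_del and fk_upd are hypothetical; unnamed constraints get system-generated names like INTEG_NN):

```sql
create table master (m int not null primary key);
create table detail (
  a int constraint fk_del references master(m) on delete cascade,
  b int constraint fk_upd references master(m) on update cascade
);

-- Under DDL autocommit, each statement runs in its own transaction:
alter table detail drop constraint fk_del;  -- succeeds, but reloading MASTER
                                            -- re-takes the index on DETAIL.B
alter table detail drop constraint fk_upd;  -- fails: object INDEX is in use
```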

Maybe, just maybe, changing the order of operations (first dropping the
trigger in master so the index in detail is released, and only then dropping
the index in detail) would work, but:
- it doesn't necessarily solve the problem in concurrent usage;
- the trigger might have dependencies on other tables, causing other
entities to be loaded that reference the same indexed field and hence lock
the index.


> I think I understand the mechanism of the trigger and the commonly used
> PK master index, but to my mind it would be more of a problem to do it
> in one transaction than in two. But I don't know enough about the
> internal principles of IB/FB.

When you use normal DDL autocommit, each DDL statement happens in its own
transaction; hence, you see a conflict. However, inside the same
transaction, the engine first gets a lock on the index (for the trigger) and
the second time it requests a lock on the same index again (to be able to
drop it). Since it's the same lock (created by the same txn), it's granted
again. But when another txn tries to lock the index in order to drop it (so
that no request will try to use a vanished index), the exclusive lock cannot
be granted to that other transaction. Therefore, you see the infamous
"object INDEX is in use" error.
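That suggests the practical workaround: make both drops happen in the same transaction. A sketch using isql (SET AUTODDL is an isql client command; the constraint names fk_del and fk_upd are hypothetical):

```sql
-- Disable per-statement DDL commit so both drops share one transaction;
-- the second lock request is then granted to the same txn.
set autoddl off;
alter table detail drop constraint fk_del;
alter table detail drop constraint fk_upd;
commit;
```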

C.
--
Claudio Valderrama C. - http://www.cvalde.com - http://www.firebirdSql.org
Independent developer
Owner of the Interbase® WebRing