Subject Re: [Firebird-Architect] Metadata locking policy and caching
Author Adriano dos Santos Fernandes
Alex Peshkov wrote:
> Imagine an SQL batch doing some metadata changes. Will you suggest people
> analyze the MON$ tables before actually changing something?
>
They don't need to analyze anything. I'm proposing a way for one to see
all object locks acquired per statement. It could be used directly in a
DELETE FROM MON$...
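
For example, something like this (only a sketch; MON$OBJECT_LOCKS and its
columns are hypothetical names for the proposed table, and SP1 is just an
illustrative object):

    /* see which statements hold a lock on procedure SP1 */
    SELECT l.MON$STATEMENT_ID, l.MON$OBJECT_NAME, l.MON$OBJECT_TYPE
      FROM MON$OBJECT_LOCKS l
      WHERE l.MON$OBJECT_NAME = 'SP1';

    /* or cancel the offending statements directly via a MON$ delete */
    DELETE FROM MON$STATEMENTS s
      WHERE EXISTS (SELECT 1 FROM MON$OBJECT_LOCKS l
                    WHERE l.MON$STATEMENT_ID = s.MON$STATEMENT_ID
                      AND l.MON$OBJECT_NAME = 'SP1');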

If the user doesn't care, the DDL may simply fail when others are using
the objects. Or we may have your solution (3).

>> And then I need to insist on deprecating isc_ddl (removing it in V3.0).
>> Changing the DDL logic to DdlNodes (making the parser OO-friendly and DDL
>> changes much simpler) is not very difficult, but constructing parser
>> nodes from DYN codes seems a very difficult task.
>>
>
> And moreover, a very useless task.
>
Sure.

>
>> Maintaining parallel DYN
>> handling is also far from ideal.
>>
>> To not make the (few) users doing DDL changes with GPRE very unhappy, we
>> may introduce EXECUTE STATEMENT in GPRE. What do you think?
>>
>
> Good solution, and it should work. But what to do with gdef? It's still used
> by TCS (very actively), and I'm afraid it may be used by some people.
>
I think we should remove it and rewrite our tests. We already don't support
old OSs, so why do we need to keep supporting this dinosaur when it gets in
our way? Users will still be able to use the old utilities without
upgrading. :-)
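
On the GPRE side, the EXECUTE STATEMENT route could look roughly like this
(a sketch of syntax that does not yet exist; the table and column names are
just illustrative):

    /* instead of GPRE's native DDL handling or isc_ddl calls: */
    EXEC SQL
        EXECUTE STATEMENT 'ALTER TABLE employees DROP old_field';
    EXEC SQL
        COMMIT;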

> Not agreed. Full DDL transaction support and the ability to modify metadata
> on the fly are both desired.
There is another complicating factor here: statements prepared in one
transaction may run in another one (the API allows preparing a statement
with isc_dsql_prepare under transaction T1 and later executing it with
isc_dsql_execute under T2). How could metadata versions work in this
situation?

> The engine should better track versions itself instead of suggesting people
> do it manually using selects from the MON$ system tables.
>
>
>> and not work in some conditions.
>>
>> When you have table changes, versions don't apply. Why would one want to
>> update an already dropped column?
>>
>
> If a transaction was started before the column was dropped, it must see it.
>
OK. For tables, the lock rules may be less restrictive.

>
>> When one finds a bug in a procedure, he fixes it, and the fix may need
>> data changes (caused by the previously buggy code). Keeping others
>> running the obsolete code may be a very bad thing.
>>
>
> Have you ever seen a bug report where use of an old copy of a procedure (in
> CS, for example) caused problems when a new one existed? I have not.
I introduced the DSQL cache lock to fix such a problem. :-)

We regularly see users complaining about old procedure and trigger versions
running even in SS. We point out to them that this behavior is documented in
the IB6 docs.

> Generally, we have the following options:
> 1. Let old transactions continue using old versions.
> 2. Do not let anyone modify any object used in any other running request
> (transaction, prepared statement, etc.).
> 3. Kill and unwind any request using an object we want to modify (BTW, ideal
> when related data changes are needed).
> Certainly, both 1 and 3 are easier to implement, and they provide somewhat
> stricter logic. But I'm sure users will not like either of them.
Option 3 may be implemented in combination with option 2, using FORCEd DDL
changes. For example, "DROP PROCEDURE sp1 FORCE" (or some other syntax) would
kill everyone using the procedure. FORCE might also be a transaction
parameter, as sketched below.
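
Purely illustrative, none of this syntax exists yet:

    -- statement-level force: kill and unwind every request
    -- currently using SP1, then drop it
    DROP PROCEDURE sp1 FORCE;

    -- transaction-level force: every DDL statement in this
    -- transaction behaves as if FORCE was specified
    SET TRANSACTION FORCE DDL;
    DROP PROCEDURE sp1;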


Adriano