| Subject | Re: [Firebird-Architect] Metadata cache |
| --- | --- |
| Author | Vlad Khorsun |
| Post date | 2008-08-08T20:19:47Z |
>>>>> Att1: create procedure P3 as begin execute procedure P1; end
>>>>> Att1: could execute P3
>>>>>
>>>> New instance of P3 will call old (current) instance of P1
>>>>
>>>>> Att2: alter P1 parameters
>>>>> Att2: could execute P1
>>>>>
>>>> New instance of P1 executed
>>>>
>>>>> Att1: commit
>>>>> Att2: commit
>>>>>
>>>> P3's internal request will be invalidated and the engine would load and
>>>> parse P3's BLR on the next run.
>>>>
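In SQL terms, the scenario quoted above looks roughly like this (a sketch only; which statement runs in which attachment is marked in comments, and P1's new parameter is an assumption for illustration):

    /* Committed beforehand: P1 exists and takes no parameters. */
    CREATE PROCEDURE P1 AS BEGIN /* old body */ END;
    COMMIT;

    /* Attachment 1 */
    SET TERM ^ ;
    CREATE PROCEDURE P3 AS BEGIN EXECUTE PROCEDURE P1; END^
    SET TERM ; ^
    EXECUTE PROCEDURE P3;       -- calls the old (current) instance of P1

    /* Attachment 2, concurrently */
    ALTER PROCEDURE P1 (X INTEGER) AS BEGIN END;
    EXECUTE PROCEDURE P1(1);    -- the new instance of P1 is executed

    /* Attachment 1 */ COMMIT;
    /* Attachment 2 */ COMMIT;
    /* After both commits P3's internal request is invalidated; the engine
       re-loads and re-parses P3's BLR on its next run. */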
>>> And parsing will not work, because it will not compile.
>>>
>>
>> Do you mean it will not compile because of an incompatible change of
>> P1's signature (input/output params)?
>>
> Yes.
>
>> If the first, how would your proposal prevent an incompatible change of P1
>> if P3 already existed and was not even loaded at the time P1 was altered?
>> This is a matter of dependency tracking, I'd say.
>>
> I think when altering P3, a global exclusive lock should be acquired on it
Does a record lock in the system table do the job?
> and a global shared lock acquired on P1. P1 will not be alterable until P3's
> transaction is committed.
Why? What problem will it solve?
It seems you mixed up dependency checking and metadata cache handling.
When the dependency check starts to work correctly (for procedures and their
parameters), nobody will be able to alter a procedure in an incompatible way.
And this does not depend on the state of the metadata cache.
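To illustrate the kind of incompatible change such a parameter-level dependency check would have to reject, a hypothetical sketch (procedure names follow the thread; parameter names, bodies and the exact reaction of the engine are assumptions, not current behaviour):

    SET TERM ^ ;
    CREATE PROCEDURE P1 (A INTEGER) RETURNS (B INTEGER)
    AS BEGIN B = A + 1; END^

    CREATE PROCEDURE P3 RETURNS (B INTEGER)
    AS BEGIN EXECUTE PROCEDURE P1(1) RETURNING_VALUES :B; END^
    SET TERM ; ^
    COMMIT;

    /* Incompatible change: P1 loses its input parameter while P3 still
       passes one.  With dependency tracking working down to parameters,
       the engine should refuse this ALTER instead of leaving P3
       uncompilable. */
    SET TERM ^ ;
    ALTER PROCEDURE P1 RETURNS (B INTEGER)
    AS BEGIN B = 0; END^
    SET TERM ; ^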
> If we figure out the way these locks interact with prepared statement
> invalidation and object ref. counters for execution, we'll have
> semantics between what you and I proposed.
Would you figure this out for me, please?
> Versioned changes are not the important thing; the important things are
> how and when others receive errors about DDL changes and whether others
> prevent DDL changes or not.
I already described when and how a statement will be invalidated:
when - after the commit of the DDL transaction and before the start of statement execution;
how - if the engine has enough info, it will try to recompile the statement silently;
else (or if the recompile failed) the statement is marked as invalid and raises an error
on every execution attempt.
It seems logical and natural to me and adds minimal limitations for developers.
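A sketch of those proposed semantics as seen from two attachments (this describes the behaviour being proposed, not the current engine; P1 is assumed to take no parameters, as in the scenario quoted above):

    /* Attachment 1: P3 is prepared and cached. */
    EXECUTE PROCEDURE P3;

    /* Attachment 2: compatible DDL - body changes, signature stays. */
    ALTER PROCEDURE P1 AS BEGIN /* new body, same parameters */ END;
    COMMIT;

    /* Attachment 1: after the DDL commit and before the next execution
       the engine recompiles the cached P3 request silently. */
    EXECUTE PROCEDURE P3;   -- works, now runs against the new P1

    /* Attachment 2: incompatible DDL - P1's parameter list changes. */
    ALTER PROCEDURE P1 (X INTEGER) AS BEGIN END;
    COMMIT;

    /* Attachment 1: the silent recompile fails, P3's statement is
       marked invalid and every further attempt raises an error. */
    EXECUTE PROCEDURE P3;   -- error until P3 itself is fixed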
>>> It's bad if it's not
>>> marked with RDB$VALID_BLR = FALSE. It's not good even if it's marked.
>>>
>>
>> Who prevents us from setting RDB$VALID_BLR to FALSE if the BLR parse failed?
> It was introduced to be marked as false as soon as the object is
> invalidated, to avoid surprises on execution, and marked as true when
> it's parseable.
Good, we have one thing less to implement ;)
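For reference, this flag can be inspected directly in the system tables (Firebird 2.1 layout; RDB$TRIGGERS carries the same column for triggers):

    /* Procedures whose stored BLR is currently marked as invalid. */
    SELECT RDB$PROCEDURE_NAME
      FROM RDB$PROCEDURES
     WHERE RDB$VALID_BLR = 0;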
Regards,
Vlad