Subject Re: [Firebird-Architect] Metadata cache
Author Vlad Khorsun
> Vlad Khorsun escreveu:
>>> In my proposal, manual or automatic invalidation of a prepared
>>> statement was the worst part of it. When a statement is allocated and
>>> prepared, it appears it should run correctly if it doesn't have logic
>>> problems. When execution fails, it is generally a problem of that specific
>>> run. But invalidation causes every subsequent run to fail, and it's not
>>> very logical to have to re-prepare the statement.
>>>
>>
>> If the engine has enough information, it can re-prepare the statement
>> silently. This is the case for stored procedures\triggers\etc., but not for
>> user statements. Speaking more generally - if the engine knows the SQL text
>> or BLR, it can re-prepare the statement transparently to the user application.
>>
> User (non-stored) statements could automatically be re-prepared only if
> the input and output messages do not change.

This may also be taken into account. Let's raise an error in such a condition.
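
Just to show what I have in mind (a rough sketch only - CachedStatement,
MessageFormat, prepareFromText and the rest are invented names for the example,
not engine code): if the SQL text (or BLR) is known and the new input\output
formats are the same as before, the statement is re-prepared silently; otherwise
the application gets an error and must re-prepare it itself.

// Rough sketch only: all names here are invented for illustration.
#include <stdexcept>
#include <string>
#include <vector>

struct MessageFormat
{
    std::vector<std::string> fieldTypes;   // simplified: one type name per field

    bool operator==(const MessageFormat& other) const
    {
        return fieldTypes == other.fieldTypes;
    }
};

struct CachedStatement
{
    std::string sqlText;       // known for DSQL; for procedures/triggers BLR plays this role
    MessageFormat inFormat;    // input message layout the application was given
    MessageFormat outFormat;   // output message layout the application was given
    bool obsolete = false;     // set when a referenced object got a new version
};

// Placeholder for a real prepare that builds formats from current metadata.
static void prepareFromText(const std::string&, MessageFormat& in, MessageFormat& out)
{
    in = MessageFormat{};
    out = MessageFormat{};
}

// On execute: if the statement became obsolete, try to re-prepare it silently.
static void ensureUsable(CachedStatement& stmt)
{
    if (!stmt.obsolete)
        return;

    MessageFormat newIn, newOut;
    prepareFromText(stmt.sqlText, newIn, newOut);

    // A silent re-prepare is acceptable only while the application-visible
    // message formats stay the same; otherwise the application must see an error.
    if (newIn == stmt.inFormat && newOut == stmt.outFormat)
    {
        stmt.obsolete = false;   // transparent to the user application
        return;
    }

    throw std::runtime_error("input/output format changed - statement must be re-prepared by the application");
}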

>>> Applications may be caching prepared statements (like we do in EPP
>>> files, but using DSQL), and these statements will fail forever.
>>>
>>
>> This is up to the application developer. He knows what kind of statements
>> are cached, when the metadata is changed, and how to handle the corresponding error.
>>
> More below...
>
>>
>>> The problem I see is "unlucky" transactions getting partial changes from
>>> system tables.
>>>
>>
>> We must prevent such a condition, of course. And the metadata objects'
>> locks may help us here, don't they?
>>
> May be.
>
>>>>>> When a new object is added into the global metadata cache, it is
>>>>>> signaled (via a special AST) to all other processes, and they load the
>>>>>> new object version from the database into their global metadata caches,
>>>>>> mark the old instance as obsolete, etc.
>>>>>>
>>>>>>
>>> Will they just be marked as "must-read" when necessary?
>>>
>>
>> Explain, please
>>
> "its signaled (via special AST) to all other processes and they loaded
> new object version from database" - does it means object is marked as
> must-read and loaded from database only when necessary?

Such objects will be removed from the global metadata cache. When a new
statement is prepared, it will not find the object in the global metadata cache
and will load it from disk.
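
Something like this, if it helps to see the intended flow (a sketch with
invented names, not real engine code) - the AST handler only throws the object
out of the process-wide cache, and the read from disk happens lazily on the next
prepare that actually needs it:

// Sketch with invented names (GlobalMetadataCache, loadFromDisk, ...);
// it only illustrates "drop on AST, load lazily on next prepare".
#include <map>
#include <memory>
#include <mutex>
#include <string>

struct MetaObject
{
    std::string name;
    int version = 0;
};

class GlobalMetadataCache
{
public:
    // Called from the AST handler: do NOT read the database here,
    // just forget the object so the next prepare reloads it.
    void invalidate(const std::string& name)
    {
        std::lock_guard<std::mutex> guard(mutex_);
        objects_.erase(name);
    }

    // Called during prepare: load from disk only on a cache miss,
    // so many Classic processes do not all hit the disk on every signal.
    std::shared_ptr<MetaObject> get(const std::string& name)
    {
        std::lock_guard<std::mutex> guard(mutex_);
        auto it = objects_.find(name);
        if (it != objects_.end())
            return it->second;

        auto obj = loadFromDisk(name);
        objects_[name] = obj;
        return obj;
    }

private:
    std::shared_ptr<MetaObject> loadFromDisk(const std::string& name)
    {
        // placeholder for the real system-table read
        return std::make_shared<MetaObject>(MetaObject{ name, /*version*/ 1 });
    }

    std::mutex mutex_;
    std::map<std::string, std::shared_ptr<MetaObject>> objects_;
};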

> Or is the object loaded as soon as the signal is received? This would
> probably cause performance problems with many Classic processes.

Say "no" to performance problems :)

>>>>> What is not clear for me:
>>>>> 1) A statement is prepared in one transaction, the transaction is
>>>>> committed, and the statement is run in another transaction
>>>>>
>>>>>
>>>> How is it related to DDL?
>>>>
>>>>
>>> It relates to the invalidation of prepared statements per transaction... AFAIU,
>>> it's proposed that only statements prepared in the same transaction as
>>> the DDL one will be invalidated if the DDL touches the objects.
>>>
>>
>> No. Statements prepared in the DDL tx will be invalidated immediately after
>> each DDL statement,
> Only the ones referencing the changed object?

Sure
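
I.e. roughly like this (a sketch only; the per-statement dependency set is a
simplification invented for the example) - right after a DDL statement, only the
statements of that transaction which reference the changed object are marked for
re-prepare, the rest are not touched:

// Sketch only: invalidate, inside the DDL transaction, just those prepared
// statements that reference the object changed by the DDL statement.
#include <set>
#include <string>
#include <vector>

struct PreparedStatement
{
    std::set<std::string> referencedObjects;   // filled during compilation
    bool needsReprepare = false;
};

static void afterDdlStatement(std::vector<PreparedStatement>& txStatements,
                              const std::string& changedObject)
{
    for (auto& stmt : txStatements)
    {
        if (stmt.referencedObjects.count(changedObject))
            stmt.needsReprepare = true;   // others are left untouched
    }
}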

>> all other prepared statements *may be* invalidated after
>> DDL tx commits.
>>
> And this will cause unpredictable problems for prepared statements cached
> in the application. Why do we need it with versioned metadata?
>
> That's what I don't like. If they can continue to be executed, very old
> procedure versions may run forever.

Only an *already started execution* will continue. The next execution will
trigger a check of the object's versions, and a re-compilation or an error will occur.
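
Roughly like this (a sketch with invented names) - a running execution keeps the
versions it started with, and a new execution first compares the versions the
request was compiled against with the current ones:

// Sketch with invented names: the version check happens when an execution
// starts, never in the middle of an already running one.
#include <map>
#include <stdexcept>
#include <string>

struct CompiledRequest
{
    // object name -> metadata version the request was compiled against
    std::map<std::string, int> compiledVersions;
};

// Placeholders standing in for the real cache lookup and the real recompiler.
static int currentVersion(const std::string&) { return 1; }
static bool tryRecompile(CompiledRequest&) { return true; }

static void startExecution(CompiledRequest& request)
{
    for (const auto& entry : request.compiledVersions)
    {
        if (currentVersion(entry.first) != entry.second)
        {
            // Metadata changed since compilation: re-compile transparently if
            // possible, otherwise report an error so the application re-prepares.
            if (!tryRecompile(request))
                throw std::runtime_error("metadata changed - statement must be re-prepared");
            break;
        }
    }

    // ... continue with the (possibly refreshed) request
}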

> If they can't, you will end up with
> something very much like what I proposed with locks. (The locks solution may
> also be adapted to allow execution to continue, with the locks acting only on
> prepare and DDL changes.)

Perhaps we will combine both proposals and take the best from each :)

>>>>> 2) Metadata integrity checks (see my reply to Alex) when old versions
>>>>> are involved
>>>>>
>>>>>
>>>> OK, I also do not see a good way to solve it without a per-process (not global)
>>>> RW lock per cache (i.e. per database). Luckily, this lock will be taken
>>>> for 'write' only for a very short period of time - when the transaction-level
>>>> cache is merged into the global one.
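
As a side note, this merge step could look about like this (a sketch with
invented names) - prepare/execute take the lock shared, and 'write' is held only
while the transaction-level cache is copied into the global one:

// Sketch (invented names): per-process RW lock around the metadata cache;
// 'write' is taken only for the short merge of a transaction-level cache.
#include <map>
#include <shared_mutex>
#include <string>

struct MetaObject { int version = 0; };

class ProcessMetadataCache
{
public:
    // Readers (prepare/execute paths) take the lock shared.
    MetaObject lookup(const std::string& name)
    {
        std::shared_lock<std::shared_mutex> readGuard(lock_);
        auto it = objects_.find(name);
        return it != objects_.end() ? it->second : MetaObject{};
    }

    // Called at DDL commit: hold the write lock only while the
    // transaction-level cache is merged into the global one.
    void mergeTransactionCache(const std::map<std::string, MetaObject>& txCache)
    {
        std::unique_lock<std::shared_mutex> writeGuard(lock_);
        for (const auto& entry : txCache)
            objects_[entry.first] = entry.second;   // new versions replace old
    }

private:
    std::shared_mutex lock_;
    std::map<std::string, MetaObject> objects_;
};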
>>>>
>>> Will Att2's commit fail in this situation? How will it be done in CS?
>>>
>>> Att1: create procedure P3 as begin execute procedure P1; end
>>> Att1: could execute P3
>>>
>>
>> The new instance of P3 will call the old (current) instance of P1
>>
>>
>>> Att2: alter P1 parameters
>>> Att2: could execute P1
>>>
>>
>> The new instance of P1 is executed
>>
>>
>>> Att1: commit
>>> Att2: commit
>>>
>>
>> P3's internal request will be invalidated, and the engine will load and parse
>> P3's BLR on the next run.
> And the parse will not work, because it will not compile.

Do you mean it will not compile because of an incompatible change to P1's
signature (input\output params)? Or do you have another reason in mind?

If the first - how would your proposal prevent an incompatible change of P1 if
P3 already existed and was not even loaded at the time of the alter of P1?
This is up to dependency tracking, I'd say.
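
What I mean by dependency tracking, very roughly (a sketch; the map below is an
invented in-memory stand-in for what RDB$DEPENDENCIES already records on disk):

// Sketch only (invented names): reject an incompatible parameter change when
// other objects depend on the procedure, whether or not they are loaded.
#include <map>
#include <set>
#include <stdexcept>
#include <string>

// Hypothetical in-memory view of RDB$DEPENDENCIES:
// depended-on object -> objects that reference it.
using DependencyMap = std::map<std::string, std::set<std::string>>;

static void checkIncompatibleAlter(const DependencyMap& deps,
                                   const std::string& procedureName)
{
    auto it = deps.find(procedureName);
    if (it != deps.end() && !it->second.empty())
    {
        // P3 (and possibly others) reference P1: an incompatible signature
        // change would break them on their next compilation.
        throw std::runtime_error("object " + procedureName +
                                 " is used by dependent objects; changing its parameters is not allowed");
    }
}

With the example above, the alter of P1's parameters would be refused (or at
least flagged) as long as P3 is still recorded as depending on P1.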

> It's bad if it's not
> marked with RDB$VALID_BLR = FALSE. It's not good even if it's marked.

Who prevents us from setting RDB$VALID_BLR to FALSE if the BLR parse fails?
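
I.e. the next run can simply do this (a sketch; only RDB$VALID_BLR is the real
system column, the helpers are invented placeholders for the example):

// Sketch: on the next run, if parsing the stored BLR fails, flag the
// procedure as invalid and raise the error.
#include <stdexcept>
#include <string>

static bool parseBlr(const std::string&)              { return false; } // placeholder: real BLR parse
static void setValidBlrFlag(const std::string&, bool) {}                // placeholder: update RDB$VALID_BLR

static void reloadProcedure(const std::string& procedureName)
{
    if (!parseBlr(procedureName))
    {
        setValidBlrFlag(procedureName, false);   // RDB$VALID_BLR = FALSE
        throw std::runtime_error("procedure " + procedureName +
            " became invalid after a metadata change and must be recompiled");
    }

    setValidBlrFlag(procedureName, true);        // parsed fine, keep it marked valid
}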

Regards,
Vlad