Subject: Re: [Firebird-Architect] Metadata cache
Author: Adriano dos Santos Fernandes
Vlad Khorsun wrote:
>> In my proposal, manual or automatic invalidation of prepared
>> statements was the worst part of it. Once a statement is allocated and
>> prepared, it should run correctly as long as it has no logic
>> problems. When an execution fails, it is generally a problem of that
>> specific run. But invalidation causes every subsequent run to fail, and
>> it is not very logical to have to re-prepare the statement.
>>
>
> If the engine has enough information, it can re-prepare a statement
> silently. This is the case for stored procedures/triggers/etc., but not
> for user statements. Speaking more generally: if the engine knows the
> SQL text or the BLR, it can re-prepare the statement transparently to
> the user application.
>
User (non-stored) statements could be re-prepared automatically only if
their input and output messages do not change, as in the sketch below.
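
Something like this sketch is what I mean. All names here (MsgFormat,
tryTransparentReprepare, the prepare callback) are hypothetical, not the
real engine API; it only shows the "same messages" check:

    // Sketch: transparent re-prepare is allowed only when the input and
    // output message formats of the new statement match the old ones.
    #include <cstddef>
    #include <string>
    #include <vector>

    struct MsgFormat                 // describes one input or output message
    {
        struct Item { int sqlType; int length; };
        std::vector<Item> items;

        bool operator==(const MsgFormat& o) const
        {
            if (items.size() != o.items.size())
                return false;
            for (std::size_t i = 0; i < items.size(); ++i)
                if (items[i].sqlType != o.items[i].sqlType ||
                    items[i].length != o.items[i].length)
                    return false;
            return true;
        }
    };

    struct PreparedStatement
    {
        std::string sqlText;         // kept so the engine can re-prepare silently
        MsgFormat inFormat, outFormat;
    };

    // Returns true when the statement was re-prepared without the
    // application noticing; false means the message layout changed and
    // the client must prepare (and re-bind its buffers) again.
    bool tryTransparentReprepare(PreparedStatement& stmt,
                                 PreparedStatement (*prepare)(const std::string&))
    {
        PreparedStatement fresh = prepare(stmt.sqlText);
        if (fresh.inFormat == stmt.inFormat && fresh.outFormat == stmt.outFormat)
        {
            stmt = fresh;            // formats unchanged: swap silently
            return true;
        }
        return false;
    }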

>
>
>> Applications may be caching prepared statements (like we do in EPP
>> files, but using DSQL), and these statements will fail forever.
>>
>
> This is up to the application developer. He knows what kinds of statements
> are cached, when metadata is changed, and how to handle the corresponding error.
>
More below...

>
>> The problem I see is "unlucky" transactions getting partial changes from
>> system tables.
>>
>
> We must prevent such a condition, of course. And metadata object locks
> may help us with that, isn't it so?
>
Maybe.
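
If I understood the idea, it would be roughly this sketch (all names
hypothetical; the lock table itself would of course need its own
synchronization, which I omit):

    // Sketch: per-object metadata locks that keep a reader from ever
    // seeing a half-applied DDL change in the system tables.
    #include <map>
    #include <shared_mutex>
    #include <string>

    std::map<std::string, std::shared_mutex> objectLocks; // one per object

    void applyDdlChange(const std::string& objectName)
    {
        // Writer: hold the lock exclusively across *all* system-table
        // updates for this object, so no partial state is visible.
        std::unique_lock lock(objectLocks[objectName]);
        // ... update RDB$PROCEDURES, RDB$PROCEDURE_PARAMETERS, etc. ...
    }

    void loadMetadataObject(const std::string& objectName)
    {
        // Reader: a shared lock is enough; it only waits out a writer.
        std::shared_lock lock(objectLocks[objectName]);
        // ... read the object's definition from the system tables ...
    }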

>>>>> When a new object is added into the global metadata cache, it is
>>>>> signaled (via a special AST) to all other processes, and they load
>>>>> the new object version from the database into their global metadata
>>>>> caches, mark the old instance as obsolete, etc.
>>>>>
>>>>>
>> Will they just be marked as "must-read" when necessary?
>>
>
> Explain, please
>
"its signaled (via special AST) to all other processes and they loaded
new object version from database" - does it means object is marked as
must-read and loaded from database only when necessary?

Or just when signal is received the object is loaded? This probably
cause performance problems with many Classic process.
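
The must-read variant I have in mind looks roughly like this sketch
(hypothetical names; synchronization of the entry itself is left out):

    // Sketch: the AST handler only marks the cached object obsolete; the
    // expensive reload from the database happens on the next lookup, not
    // at signal time, so N Classic processes don't all hit the database
    // at once.
    #include <atomic>
    #include <memory>

    struct MetaObject { /* parsed metadata */ };

    struct CacheEntry
    {
        std::shared_ptr<MetaObject> object;
        std::atomic<bool> obsolete{false};
    };

    // Called from the AST: cheap, no database access at all.
    void onMetadataChangedAst(CacheEntry& entry)
    {
        entry.obsolete.store(true, std::memory_order_release);
    }

    // Called on the normal lookup path.
    std::shared_ptr<MetaObject> getObject(CacheEntry& entry,
                                          std::shared_ptr<MetaObject> (*reload)())
    {
        if (entry.obsolete.exchange(false, std::memory_order_acq_rel))
            entry.object = reload(); // only processes that actually use
                                     // the object pay the reload cost
        return entry.object;
    }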

>
>
>>>> What is not clear to me:
>>>> 1) A statement is prepared in one transaction, the transaction is
>>>> committed, and the statement is run in another transaction
>>>>
>>>>
>>> How is that related to DDL?
>>>
>>>
>> It relates to transaction-based invalidation of prepared statements...
>> AFAIU, it is proposed that only statements prepared in the same
>> transaction as the DDL one will be invalidated if the DDL touches the
>> objects.
>>
>
> No. Statements prepared in the DDL tx will be invalidated immediately
> after each DDL statement,
Only the ones referencing the changed object?

> all other prepared statements *may be* invalidated after the
> DDL tx commits.
>
And this will cause unpredictable problems for prepared statements cached
in the application. Why do we need that with versioned metadata?

That's what I don't like. If they can continue to be executed, very old
procedure versions may run forever. If they can't, you will end up with
something very much like what I proposed with locks. (The locks solution
may also be adapted to allow execution to continue, with locks acting
only during prepare and DDL changes; see the sketch below.)
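
Roughly, that adapted variant would be (a sketch; one std::shared_mutex
per object stands in for the engine's lock manager):

    // Sketch: prepare takes the object's lock shared, DDL takes it
    // exclusive, and plain execution takes no lock at all, so running
    // statements are never interrupted by metadata changes.
    #include <shared_mutex>

    std::shared_mutex procedureLock; // one lock per metadata object

    void prepareStatement()
    {
        std::shared_lock lock(procedureLock); // blocks only concurrent DDL
        // ... compile BLR against the current metadata version ...
    }

    void alterProcedure()
    {
        std::unique_lock lock(procedureLock); // waits for in-flight prepares
        // ... apply the DDL change and bump the object version ...
    }

    void executeStatement()
    {
        // No lock here: the statement keeps the version it was prepared
        // against, so old versions finish while new prepares see new DDL.
    }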


>>>> 2) Metadata integrity checks (see my reply to Alex) when old versions
>>>> are involved
>>>>
>>>>
>>> OK, I also do not see a good way to solve it without a per-process (not
>>> global) RW lock per cache (i.e. per database). Luckily, this lock will
>>> be taken for 'write' only for a very short period of time - when the
>>> transaction-level cache is merged into the global one.
>>>
>> Will Att2's commit fail in this situation? How will it be done in CS?
>>
>> Att1: create procedure P3 as begin execute procedure P1; end
>> Att1: could execute P3
>>
>
> The new instance of P3 will call the old (current) instance of P1
>
>
>> Att2: alter P1 parameters
>> Att2: could execute P1
>>
>
> The new instance of P1 is executed
>
>
>> Att1: commit
>> Att2: commit
>>
>
> P3's internal request will be invalidated, and the engine will load and
> parse P3's BLR on the next run.
And the parse will not work, because it will not compile. It's bad if the
procedure is not marked with RDB$VALID_BLR = FALSE, and it's not good
even if it is marked. A sketch of that failure path is below.
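
The failure path I would expect is something like this sketch
(compileBlr and setValidBlrFlag are placeholders for what the engine
would really do):

    // Sketch: when the stored BLR no longer compiles after a dependency
    // changed, flag the procedure instead of failing silently on every run.
    #include <iostream>
    #include <stdexcept>
    #include <string>

    // Placeholder helpers; a real engine would parse the BLR and update
    // the system table here.
    static bool compileBlr(const std::string&) { return false; }
    static void setValidBlrFlag(const std::string& name, bool valid)
    {
        std::cout << "RDB$VALID_BLR(" << name << ") = " << valid << "\n";
    }

    void reloadProcedure(const std::string& procName)
    {
        if (!compileBlr(procName))
        {
            // Mark RDB$VALID_BLR = FALSE so tools and DBAs can find the
            // broken procedure; the current request still has to fail,
            // but at least not silently.
            setValidBlrFlag(procName, false);
            throw std::runtime_error("procedure " + procName +
                                     " is invalid after a dependency change");
        }
        setValidBlrFlag(procName, true);
    }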


Adriano