| Subject | Re: [Firebird-Architect] Metadata cache |
| --- | --- |
| Author | Vlad Khorsun |
| Post date | 2008-08-08T13:44:57Z |
> Vlad Khorsun wrote:
>>> In my proposal, manual or automatic invalidation of prepared
>>> statements was the worst part of it. When a statement is allocated and
>>> prepared, it appears it should run correctly if it doesn't have logic
>>> problems. When execution fails, it is generally a problem of that specific
>>> run. But invalidation causes all subsequent runs to fail, and it's not
>>> very logical to have to re-prepare the statement.
>>>
>>
>> If the engine has enough information, it can re-prepare the statement
>> silently. This is the case for stored procedures\triggers\etc., but not
>> the case for user statements. Speaking more generally - if the engine
>> knows the SQL text or BLR, it can re-prepare the statement transparently
>> to the user application.
>>
> User (non-stored) statements could automatically be re-prepared only if
> the input and output messages do not change.

This also may be taken into account. Let's raise an error in such a
condition.
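For illustration, whether a silent re-prepare is even possible depends on
whether the DDL changes the statement's input or output message. A
hypothetical sketch (the table and statements are made up):

    /* the application prepares this DSQL statement: */
    select NAME, PRICE from GOODS where ID = ?

    /* this DDL does not touch the statement's messages, so a silent
       re-prepare would be transparent to the application: */
    create index GOODS_NAME on GOODS (NAME)

    /* this DDL changes the output message (PRICE gets a new type), so a
       silent re-prepare is impossible - raising an error is safer: */
    alter table GOODS alter column PRICE type numeric(18, 4)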
>>> Applications may be caching prepared statements (like we do in EPP
>>> files, but using DSQL) and these statements will fail forever.
>>>
>>
>> This is up to the application developer. He knows what kinds of statements
>> are cached, when to change metadata, and how to handle the corresponding
>> error.
>>
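A sketch of what such handling could look like, in the Att1/Att2 style used
later in this message (the table name and the exact failure point are
assumptions of the design under discussion):

    Att1: prepare: select * from EMPLOYEE   /* cached by the application */
    Att2: alter table EMPLOYEE add SALARY numeric(18, 2)
    Att2: commit
    Att1: execute    /* fails - the DDL invalidated the statement */
    Att1: prepare: select * from EMPLOYEE   /* application re-prepares */
    Att1: execute    /* runs against the new table format */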
> More below...
>
>>
>>> The problem I see is "unlucky" transactions getting partial changes from
>>> system tables.
>>>
>>
>> We must prevent such a condition, of course. And metadata object locks
>> may help us here, isn't it?
>>
> Maybe.
>
>>>>>> When a new object is added into the global metadata cache, it is
>>>>>> signaled (via a special AST) to all other processes, and they load the
>>>>>> new object version from the database into their global metadata caches,
>>>>>> mark the old instances as obsolete, etc.
>>>>>>
>>> Will they just be marked as "must-read" when necessary?
>>>
>>
>> Explain, please
>>
> "its signaled (via special AST) to all other processes and they loaded
> new object version from database" - does it means object is marked as
> must-read and loaded from database only when necessary?
statement will be prepared it will not found object in global metadata cache
and will load it from disk.
> Or just when signal is received the object is loaded? This probablySay "no" to performance problems :)
> cause performance problems with many Classic process.
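In other words, the reload is lazy. A hypothetical timeline with two Classic
processes (the procedure name is made up):

    Att1 (process 1): alter procedure P1 as begin end
    Att1 (process 1): commit
         /* AST tells other processes to remove P1 from their caches;
            nothing is read from disk yet */
    Att2 (process 2): prepare: execute procedure P1
         /* cache miss - only now is the new P1 version loaded */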
>>>>> What is not clear for me:
>>>>> 1) Prepare a statement in one transaction, the transaction is committed,
>>>>> and the statement is run in another transaction
>>>>>
>>>> How is it related to DDL?
>>>>
>>> With relation to invalidation of transaction-prepared statements... AFAIU,
>>> it's proposed that only statements prepared in the same transaction as
>>> the DDL one will be invalidated if the DDL touches the objects.
>>>
>>
>> No. A statement prepared in the DDL tx will be invalidated immediately
>> after each DDL statement,
> Only ones referencing the changed object?

Sure.

>> all other prepared statements *may be* invalidated after
>> the DDL tx commits.
>>
> And this will cause unpredictable problems with prepared statements cached
> in the application. Why do we need it with versioned metadata?
>
> That's what I don't like. If they can continue to be executed, very old
> procedure versions may run forever.

It will continue only for an *already started execution*. The next execution
will trigger a check of the object's versions, and re-compilation or an error
will occur.
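A hypothetical illustration, again in the Att1/Att2 style (the procedure name
is made up):

    Att1: execute procedure LONG_REPORT   /* still running */
    Att2: alter procedure LONG_REPORT as begin end
    Att2: commit
    Att1: /* the already started invocation completes with the old version */
    Att1: execute procedure LONG_REPORT
          /* the version check fires here: the new BLR is compiled, or an
             error is raised if compilation fails */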
> If they can't, you will end up with
> something very like what I proposed with locks. (The locks solution may
> also be adapted to allow execution to continue, with locks acting only on
> prepare and DDL changes.)

Perhaps we will combine both proposals and take the best from them :)
>>>>> 2) Metadata integrity checks (see my reply to Alex) when old versions
>>>>> are involved
>>>>>
>>>> OK, I also do not see a good way to solve it without a per-process (not
>>>> global) RW lock per cache (i.e. per database). Luckily, this lock will
>>>> be taken for 'write' only for a very small period of time - when the
>>>> transaction-level cache is to be merged into the global one.
>>>>
>>> Will Att2's commit fail in this situation? How will it be done in CS?
>>>
>>> Att1: create procedure P3 as begin execute procedure P1; end
>>> Att1: could execute P3
>>>
>>
>> The new instance of P3 will call the old (current) instance of P1
>>
>>> Att2: alter P1 parameters
>>> Att2: could execute P1
>>>
>>
>> The new instance of P1 is executed
>>
>>> Att1: commit
>>> Att2: commit
>>>
>>
>> P3's internal request will be invalidated, and the engine will load and
>> parse P3's BLR on the next run.
> And the parse will not work, because it will not compile.

Do you mean - it will not compile because of an incompatible change of P1's
signature (input\output params)? Or do you know another reason?
If the first - how would your offer prevent an incompatible change of P1 if P3
already existed and was not even loaded at the time of the alter of P1? This
is up to dependencies tracking, I'd say.
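A hypothetical sequence showing the point (procedure names follow the example
above):

    Att1: create procedure P1 (A integer) as begin end
    Att1: create procedure P3 as begin execute procedure P1 (1); end
    Att1: commit
    /* much later; P3 sits in no cache of any process */
    Att2: alter procedure P1 (A integer, B integer) as begin end
    /* only the recorded dependencies (RDB$DEPENDENCIES) can tell the
       engine that P3 would be broken by this change */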
> It's bad if it's not
> marked with RDB$VALID_BLR = FALSE. It's not good even if it's marked.

Who prevents us from setting RDB$VALID_BLR to FALSE if the BLR parse failed?
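Then broken objects would at least stay discoverable, e.g. (assuming the flag
is written as 0 for an invalid body, as in RDB$PROCEDURES):

    select RDB$PROCEDURE_NAME
      from RDB$PROCEDURES
     where RDB$VALID_BLR = 0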
Regards,
Vlad