Subject | Re: [Firebird-Architect] Metadata cache |
---|---|
Author | Adriano dos Santos Fernandes |
Post date | 2008-08-08T11:20:29Z |
Alex Peshkov wrote:
>>> When a transaction creates\alters\drops an object, it creates a new
>>> version and attaches it to the local cache. All statements prepared
>>> within this transaction that use the altered\dropped object are marked
>>> as obsolete.
>>>
>> Why? In this case, I think the DDL command should fail if there are
>> prepared statements in the same transaction using the object.
>>
>
> Why should it fail? What's wrong with disabling access to prepared statements
> in the current transaction for the future?
>
In my proposal, manual or automatic invalidation of prepared statements
was the worst part of it. When a statement is allocated and prepared, it
is expected to run correctly as long as it has no logic problems. When
execution fails, it is generally a problem of that specific run. But
invalidation causes every subsequent run to fail, and it is not very
logical to have to re-prepare the statement.

Applications may be caching prepared statements (like we do in EPP
files, but using DSQL), and these statements will fail forever.
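To make this concrete, here is a minimal sketch of such an application-side
cache (the types and names are only illustrative, not the real client API):

```cpp
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for a DSQL prepared statement (not the real API).
struct PreparedStatement {
    bool invalidated = false;   // would be set when the engine invalidates it
    void execute() {
        if (invalidated)
            throw std::runtime_error("statement invalidated - re-prepare needed");
        // ... send the execute request to the server ...
    }
};

// Hypothetical connection object.
struct Connection {
    std::unique_ptr<PreparedStatement> prepare(const std::string& /*sql*/) {
        return std::make_unique<PreparedStatement>();
    }
};

// Application-side cache: prepare once, reuse for the connection's lifetime,
// the same way EPP-generated code keeps its statements prepared.
class StatementCache {
public:
    explicit StatementCache(Connection& c) : conn_(c) {}

    PreparedStatement& get(const std::string& sql) {
        auto it = cache_.find(sql);
        if (it == cache_.end())
            it = cache_.emplace(sql, conn_.prepare(sql)).first;
        return *it->second;   // once invalidated, every later execute() fails
    }

private:
    Connection& conn_;
    std::map<std::string, std::unique_ptr<PreparedStatement>> cache_;
};
```

With such a cache, a single invalidation turns every later execute() into an
error until the application notices and re-prepares, which is exactly what
this kind of code never expects to have to do.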
>>> When the transaction commits, it marks all changed object instances in
>>> the global cache as obsolete, removes them from the global cache and
>>> replaces them with the local cache objects.
>>>
>> So metadata will always work in read-committed mode?
>>
>
> Yes. This is not good, but it looks like quite a possible compromise for FB3.
>
The problem I see is "unlucky" transactions getting partial changes from
system tables.
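Roughly, I read the commit-time merge like this (a sketch with my own names,
not the engine's real structures), which is also where that window for partial
changes comes from:

```cpp
#include <map>
#include <memory>
#include <string>

// Illustrative only; not the engine's real data structures.
struct MetadataObject {
    int version = 0;
    bool obsolete = false;
};

using ObjectPtr = std::shared_ptr<MetadataObject>;

struct MetadataCache {
    std::map<std::string, ObjectPtr> objects;   // keyed by object name
};

// Commit-time merge as described: mark the old global instance obsolete and
// replace it with the transaction-local version.
void mergeOnCommit(MetadataCache& global, MetadataCache& local)
{
    for (auto& entry : local.objects)
    {
        const std::string& name = entry.first;
        ObjectPtr& newVersion = entry.second;

        auto it = global.objects.find(name);
        if (it != global.objects.end())
            it->second->obsolete = true;        // old instance marked obsolete

        global.objects[name] = newVersion;      // immediately visible to all
        // A transaction resolving several names while this loop runs may get
        // some old and some new versions - the "partial changes" problem.
    }
}
```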
>>> When a new object is added to the global metadata cache, it is signaled
>>> (via a special AST) to all other processes, and they load the new object
>>> version from the database into their global metadata cache, mark the old
>>> instance as obsolete, etc.
>>>
Will they just be marked as "must-read" when necessary?

>> What is not clear for me:
>> 1) Prepare a statement in one transaction, the transaction is committed, and
>> the statement is run in another transaction
>>
>
> How is it related to DDL?
>
With relation to transaction-level prepared statement invalidation... AFAIU,
it's proposed that only statements prepared in the same transaction as
the DDL one will be invalidated if the DDL touches the objects.

So statements prepared in another transaction will run with the old object
definitions, but the DDL transaction's own statements will not? Why should
they be invalidated, then?
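As I understand it, the proposed rule amounts to something like this sketch
(my own naming, just to show what gets invalidated and what does not):

```cpp
#include <set>
#include <string>
#include <vector>

// Illustrative sketch of the proposed rule, not engine code.
struct Statement {
    std::set<std::string> usedObjects;   // metadata objects the plan depends on
    bool obsolete = false;
};

struct Transaction {
    std::vector<Statement*> preparedStatements;
};

// On DDL, only statements prepared in the DDL's own transaction are checked;
// statements prepared in other transactions keep running with old definitions.
void onDdl(Transaction& ddlTransaction, const std::string& changedObject)
{
    for (Statement* stmt : ddlTransaction.preparedStatements)
    {
        if (stmt->usedObjects.count(changedObject))
            stmt->obsolete = true;   // this is the part whose benefit I question
    }
}
```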
>
>> 2) Metadata integrity checks (see my reply to Alex) when old versions
>> are involved
>>
>
> OK, I also do not see a good way to solve it without a per-process (not global)
> RW lock per cache (i.e. per database). Luckily, this lock will be taken
> for 'write' only for a very small period of time - when the transaction-level
> cache is to be merged into the global one.
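For the record, this is how I picture that per-process lock (std::shared_mutex
here is only an illustration of the policy, not the actual implementation):

```cpp
#include <mutex>
#include <shared_mutex>

// Illustration of the locking policy described above, not the real code.
struct DatabaseCache {
    std::shared_mutex cacheLock;   // one per process, per database
    // ... cache contents ...
};

// Readers (statement preparation, object lookups, integrity checks) share it.
void lookupObject(DatabaseCache& cache)
{
    std::shared_lock<std::shared_mutex> guard(cache.cacheLock);
    // ... resolve names against the cache ...
}

// The commit-time merge takes the lock exclusively, but only briefly.
void mergeTransactionCache(DatabaseCache& cache)
{
    std::unique_lock<std::shared_mutex> guard(cache.cacheLock);
    // ... move transaction-level objects into the global cache ...
}
```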
Att1: create procedure P3 as begin execute procedure P1; end
Att1: could execute P3
Att2: alter P1 parameters
Att2: could execute P1
Att1: commit
Att2: commit
Will Att2's commit fail in this situation? How will it be done in CS?

Adriano