Subject | Re: [Firebird-Architect] Metadata cache
---|---
Author | Vlad Khorsun
Post date | 2008-08-08T08:13:52Z
> On Thursday 07 August 2008 15:07, Adriano dos Santos Fernandes wrote:
>> Vlad Khorsun escreveu:
>> > a) Introduce transaction-level and database-level metadata cache.
>>
>> Transaction-level but related to savepoints, to allow correct run of
>> uncommitted metadata.
>
> Can you be a bit more specific here? I've always understood the suggested
> approach as "look in the transaction cache, when missing - go to the global
> one". Or maybe it's related to correct unwinding of the cache when a savepoint
> is unwound?

I think yes. Imagine:

start transaction
START SAVEPOINT S1;
ALTER PROCEDURE P1 ...;
ROLLBACK SAVEPOINT S1;
ALTER PROCEDURE P2 ...; // uses P1. Here P2 must see the old declaration of P1

> Maybe it will be enough to populate the transaction-level cache at the end of a
> DDL request, when no savepoints are needed any more at all?

Hmm... maybe... I am not sure it will handle the example above with a user
savepoint correctly.

The obvious (maybe not the best and not the easiest) way is to account for old
instances of metadata objects in the undo log. I need to think more about it.
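To make the undo-log idea a bit more concrete, here is a minimal C++ sketch (all
names below, like TxnMetadataCache, are made up just for illustration, not real
engine classes): the transaction-level cache remembers every replaced instance
per savepoint, so ROLLBACK SAVEPOINT can put the old ones back.

#include <map>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-in for a parsed metadata object (procedure, relation, ...).
struct MetaObject {
    std::string name;
    std::string definition;
};

using MetaPtr = std::shared_ptr<MetaObject>;

class TxnMetadataCache {
public:
    void startSavepoint() { undoLog.emplace_back(); }

    // DDL replaces an object: remember the old instance (or its absence)
    // in the undo log of the innermost savepoint.
    void replace(const std::string& name, MetaPtr newVersion) {
        if (!undoLog.empty()) {
            auto it = objects.find(name);
            undoLog.back().emplace_back(name, it == objects.end() ? nullptr : it->second);
        }
        objects[name] = std::move(newVersion);
    }

    // ROLLBACK SAVEPOINT: restore everything recorded since the savepoint started.
    // Assumes a matching startSavepoint() was done.
    void rollbackSavepoint() {
        for (auto it = undoLog.back().rbegin(); it != undoLog.back().rend(); ++it) {
            if (it->second)
                objects[it->first] = it->second;
            else
                objects.erase(it->first);
        }
        undoLog.pop_back();
    }

    // Lookup order discussed above: transaction cache first, the caller falls
    // back to the global (database-level) cache on a miss.
    MetaPtr find(const std::string& name) const {
        auto it = objects.find(name);
        return it == objects.end() ? nullptr : it->second;
    }

private:
    std::map<std::string, MetaPtr> objects;
    std::vector<std::vector<std::pair<std::string, MetaPtr>>> undoLog;
};

With something like this, the ALTER PROCEDURE P2 prepared after ROLLBACK
SAVEPOINT S1 in the example above would again see the old P1.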
>> > and
>> > b) Make any metadata object refcounted.
>> >
>> > Statements already have a list of used resources (metadata objects).
>> > Make them account for object references. The same for the metadata cache.
>>
>> We need something like my proposed MetaRefHolder, stored in objects and
>> passed to lookup functions.

Of course we need something ;)

> I suggest to leave implementation details for the future.

Agree ;)
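Still, just to illustrate what "refcounted" means here - a sketch only, using
shared_ptr for brevity; the real engine would rather have its own intrusive
refcounting, and MetaObject/Statement are illustrative names:

#include <memory>
#include <string>
#include <vector>

// Illustrative only: the real engine would use its own RefCounted base class
// rather than shared_ptr.
struct MetaObject {
    std::string name;
    bool obsolete = false;
};

using MetaRef = std::shared_ptr<MetaObject>;   // holding one = accounting one reference

// A prepared statement keeps counted references to every metadata object it uses
// (its list of resources), so those objects cannot go away under it.
struct Statement {
    std::vector<MetaRef> resources;
};

// The metadata cache holds references too: an object is destroyed only after it
// was removed from the cache *and* the last statement released it.
struct MetadataCache {
    std::vector<MetaRef> objects;
};

Something like MetaRefHolder would then just formalize taking and releasing such
references around lookups.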
>> > When a transaction creates\alters\drops an object, it creates a new
>> > version and attaches it into the local cache. All statements prepared within
>> > this transaction which used the altered\dropped object are marked as
>> > obsolete.
>>
>> Why? In this case, I think the DDL command should fail if there are
>> prepared statements in the same transaction using the object.

I, personally, prefer to execute the DDL and get an error on execution of the old
prepared statement. It seems more useful to me, but I can be wrong, let's
discuss it properly. Also we could look at other DBMSes...

> Why should it fail? What's wrong with disabling access to prepared statements
> in the current transaction for the future?

I think the same.
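Roughly the behavior I have in mind (again, a sketch with made-up names): the DDL
succeeds, the old instance is only marked, and the error shows up when somebody
tries to execute the stale statement.

#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

struct MetaObject { std::string name; bool obsolete = false; };
using MetaRef = std::shared_ptr<MetaObject>;

struct Statement {
    std::vector<MetaRef> resources;   // objects this statement was prepared against

    void execute() {
        for (const auto& obj : resources)
            if (obj->obsolete)
                throw std::runtime_error("object " + obj->name +
                                         " was changed; prepare the statement again");
        // ... normal execution would follow here ...
    }
};

// Run by ALTER/DROP inside the transaction: the DDL itself does not fail.
// Statements of the same transaction keep their reference to the old instance,
// and marking it obsolete is enough for execute() above to reject them later.
void alterObject(MetaRef oldVersion /*, plus attaching the new version to the txn cache */) {
    oldVersion->obsolete = true;
}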
>> > When a transaction commits, it marks all changed object instances in the
>> > global cache as obsolete, removes them from the global cache and replaces
>> > them with the local cache objects.
>>
>> So metadata will always work in read-committed mode?
>
> Yes. This is not good

This is a "must be" for me. Any metadata change must be taken into account
immediately by every existing connection, except for currently running statements.

> but looks like quite possible compromise for FB3.
> Implementing complete RepeatableRead with DDL is a much more complicated task
> (provided we do not want to have very frequent conflicts when doing DDL).

I see no value in such a mode. Do you suggest that a long running transaction
should work with an already dropped table forever? :)
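Anyway, a sketch of the commit step as I imagine it (illustrative names;
shared_ptr stands in for the real refcounting):

#include <map>
#include <memory>
#include <string>

struct MetaObject { std::string name; bool obsolete = false; };
using MetaRef = std::shared_ptr<MetaObject>;
using Cache = std::map<std::string, MetaRef>;

// Commit step: every object changed by the transaction replaces its counterpart
// in the global cache. The old instance is only marked obsolete, so statements
// currently running with it finish normally, while everything prepared or
// executed afterwards sees (or is forced to re-prepare against) the new version.
void mergeOnCommit(Cache& globalCache, const Cache& txnCache) {
    for (const auto& [name, newVersion] : txnCache) {
        auto it = globalCache.find(name);
        if (it != globalCache.end())
            it->second->obsolete = true;   // lives on until its last reference is released
        globalCache[name] = newVersion;
    }
}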
>> > Any statement, before the start of execution, checks all used objects and,
>> > if it finds an obsolete one, raises a corresponding error. Such a statement
>> > must be prepared again and will use the new objects. This behavior can be
>> > made less restrictive if needed.
>>
>> This is not different from my proposal about FORCEd metadata changes.
>
> No - quite different. Requests already running when the DDL transaction is
> committed are allowed to continue execution with old metadata (like they were
> compiled). This is the main difference for me.

Exactly. We may introduce a FORCED metadata change with Adriano's behavior -
where all currently executing statements that use the altered objects stop
execution instead of continuing.
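To show the difference in code (made-up names again; the FORCED part is purely
hypothetical):

#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

struct MetaObject { std::string name; bool obsolete = false; };
using MetaRef = std::shared_ptr<MetaObject>;

struct Statement {
    std::vector<MetaRef> resources;
    bool running = false;
};

// Behavior discussed above: the check happens only when execution starts, so
// requests already in flight keep running with the metadata they were compiled with.
void checkBeforeExecution(const Statement& stmt) {
    for (const auto& obj : stmt.resources)
        if (obj->obsolete)
            throw std::runtime_error(obj->name + " changed: prepare the statement again");
}

// Hypothetical FORCED variant (Adriano's behavior): running statements that use
// the changed object are asked to stop instead of being allowed to continue.
void forceInvalidate(std::vector<Statement*>& statements, const MetaRef& changed) {
    for (Statement* stmt : statements)
        for (const auto& obj : stmt->resources)
            if (obj == changed && stmt->running)
                stmt->running = false;   // stands in for "signal the request to stop"
}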
>> > Until the global metadata cache holds a reference
>> > to an object, it remains in memory regardless of how many statements use it.
>>
>> I don't understand exactly what you mean.
>
> I've understood this as 'the metadata cache holds references to all objects in
> it'. This means that an object in the metadata cache will not be released even
> if no statement is using it. When an object is removed from the metadata cache,
> it will be released as soon as the last statement releases its reference to it
> (or at once if there are no such statements).

Exactly, thanks for the explanation, Alex ;)
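That is exactly what counted references give for free; a tiny self-contained
demonstration (illustrative names only):

#include <cassert>
#include <map>
#include <memory>
#include <string>

struct MetaObject { std::string name; };
using MetaRef = std::shared_ptr<MetaObject>;

int main()
{
    std::map<std::string, MetaRef> cache;
    cache["P1"] = std::make_shared<MetaObject>(MetaObject{"P1"});

    MetaRef usedByStatement = cache["P1"];   // a prepared statement takes its reference

    cache.erase("P1");                       // removed from the metadata cache...
    assert(usedByStatement->name == "P1");   // ...but still alive for the statement

    usedByStatement.reset();                 // last reference gone -> object is freed
    return 0;
}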
>> > When a new object is added into the global metadata cache, this is signaled
>> > (via a special AST) to all other processes, and they load the new object
>> > version from the database into their global metadata cache, mark the old
>> > instance as obsolete, etc.
>>
>> Is it to support always-read-committed metadata on Classic?
>
> I think yes, at least to make CS/SS behave in the same way.

Yes.
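A sketch of what such an AST handler could do in each Classic process (the names
and the loader below are made up; the real one would read the system tables):

#include <map>
#include <memory>
#include <string>

struct MetaObject { std::string name; bool obsolete = false; };
using MetaRef = std::shared_ptr<MetaObject>;
using Cache = std::map<std::string, MetaRef>;

// Dummy loader; a real process would read the fresh definition from the database.
MetaRef loadFromDatabase(const std::string& name)
{
    return std::make_shared<MetaObject>(MetaObject{name, false});
}

// Delivered asynchronously to every other process: mark the local old instance
// obsolete and put the freshly loaded version into the process-level cache, so
// Classic processes follow metadata changes just like a shared SuperServer cache.
void metadataChangedAst(Cache& processCache, const std::string& name)
{
    auto it = processCache.find(name);
    if (it != processCache.end())
        it->second->obsolete = true;
    processCache[name] = loadFromDatabase(name);
}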
>> > To prevent simultaneous editing of the same object we can introduce a
>> > corresponding lock and take it in EX mode before changing the object.
>>
>> Why wouldn't record locks on system tables work?
>
> I suppose it will work.

No problem then ;)
>> What is not clear for me:
>> 1) Prepare a statement in one transaction, the transaction is committed and
>> the statement is run in other transactions
>
> How is it related to DDL?
>
>> 2) Metadata integrity checks (see my reply to Alex) when old versions
>> are involved
>
> OK, I also do not see a good way to solve it without a per-process (not global)
> RW lock per cache (i.e. per database). Luckily, this lock will be taken
> for 'write' only for a very small period of time - when the transaction-level
> cache is to be merged into the global one.

Currently, when an FK constraint is changed, all processes re-scan the
corresponding relations (load the info from the database) to pick up this
constraint. I see no reason why this will not work in the future.
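For illustration, a sketch of such a per-process RW lock around the cache, using
plain standard C++ primitives instead of the engine's own synchronization classes
(names are made up):

#include <map>
#include <memory>
#include <mutex>
#include <shared_mutex>
#include <string>

struct MetaObject { std::string name; bool obsolete = false; };
using MetaRef = std::shared_ptr<MetaObject>;
using Cache = std::map<std::string, MetaRef>;

// Per-process, per-database RW lock around the global cache. Readers are the
// lookups done while preparing statements; the only writer is the short merge
// of a transaction-level cache at DDL commit, so write contention stays small.
struct GuardedCache {
    std::shared_mutex rwLock;
    Cache objects;

    MetaRef find(const std::string& name) {
        std::shared_lock<std::shared_mutex> reader(rwLock);
        auto it = objects.find(name);
        return it == objects.end() ? nullptr : it->second;
    }

    void mergeOnCommit(const Cache& txnCache) {
        std::unique_lock<std::shared_mutex> writer(rwLock);   // held only for the merge
        for (const auto& [name, newVersion] : txnCache) {
            auto it = objects.find(name);
            if (it != objects.end())
                it->second->obsolete = true;
            objects[name] = newVersion;
        }
    }
};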
Regards,
Vlad