Subject Re: [Firebird-Architect] Metadata cache
Author Vlad Khorsun
> On Thursday 07 August 2008 15:07, Adriano dos Santos Fernandes wrote:
>> Vlad Khorsun escreveu:
>> > a) Introduce transaction-level and database-level metadata cache.
>>
>> Transaction-level, but related to savepoints, to allow correct use of
>> uncommitted metadata.
>
> Can you be a bit more specific here? I've always seen the suggested approach
> as "look in the transaction cache; when missing, go to the global one". Or
> maybe it's related to correctly unwinding the cache when a savepoint is unwound?

I think so, yes. Imagine:

start transaction;
SAVEPOINT S1;
ALTER PROCEDURE P1 ...;
ROLLBACK TO SAVEPOINT S1;
ALTER PROCEDURE P2 ...; -- uses P1; here P2 must see the old declaration of P1
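
For the "look in the transaction cache; when missing, go to the global one"
lookup you describe, I imagine something roughly like this (just a sketch,
all names are invented):

#include <map>
#include <memory>
#include <string>

struct MetaObject { std::string name; int version = 0; };
using MetaObjectPtr = std::shared_ptr<MetaObject>;
using MetaCache = std::map<std::string, MetaObjectPtr>;

MetaObjectPtr lookupObject(const MetaCache& txnCache, const MetaCache& globalCache,
                           const std::string& name)
{
    // uncommitted versions created by this transaction's DDL win...
    if (auto it = txnCache.find(name); it != txnCache.end())
        return it->second;

    // ...otherwise fall back to the last committed version
    if (auto it = globalCache.find(name); it != globalCache.end())
        return it->second;

    return nullptr;    // not cached yet: load from the system tables (not shown)
}

The savepoint question above then boils down to what ends up in txnCache
after ROLLBACK TO SAVEPOINT.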

> Maybe it will be enough to populate the transaction-level cache at the end of
> a DDL request, when no savepoints are needed any more at all?

Hmm... maybe... I'm not sure it will correctly handle the example above with
a user savepoint.

The obvious (maybe not the best and not the easiest) way is to record the old
instances of metadata objects in the undo log. I need to think more about it.
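
Very roughly, something like this could live in the savepoint (again just a
sketch; these are made-up names, not actual engine classes):

#include <map>
#include <memory>
#include <string>
#include <vector>

struct MetaObject { std::string name; int version = 0; };
using MetaObjectPtr = std::shared_ptr<MetaObject>;
using TxnMetaCache = std::map<std::string, MetaObjectPtr>;

struct MetaUndoEntry
{
    std::string name;
    MetaObjectPtr oldVersion;   // null if the object was not in the txn cache before
};

struct Savepoint
{
    std::vector<MetaUndoEntry> metaUndo;   // per-savepoint undo log for metadata
};

// Called by DDL before it puts a new object version into the txn-level cache.
void rememberOldVersion(Savepoint& sp, const TxnMetaCache& cache, const std::string& name)
{
    auto it = cache.find(name);
    sp.metaUndo.push_back({ name, it != cache.end() ? it->second : nullptr });
}

// ROLLBACK TO SAVEPOINT: restore the versions that were visible before it,
// so the later ALTER PROCEDURE P2 sees the old declaration of P1 again.
void undoMetadata(Savepoint& sp, TxnMetaCache& cache)
{
    for (auto it = sp.metaUndo.rbegin(); it != sp.metaUndo.rend(); ++it)
    {
        if (it->oldVersion)
            cache[it->name] = it->oldVersion;
        else
            cache.erase(it->name);     // the object was created under the savepoint
    }
    sp.metaUndo.clear();
}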

>> > and
>> > b) Make any metadata object refcounted.
>> >
>> > Statements already have a list of used resources (metadata objects).
>> > Make them account for the objects' references. The same for the metadata cache.
>>
>> We need something like my proposed MetaRefHolder, stored in objects and
>> passed to lookup functions.

Of course we need something ;)
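
Just to have a picture to argue about, here is a rough analogue (this is not
Adriano's actual MetaRefHolder, the names are invented):

#include <atomic>
#include <utility>

class MetaObjectBase
{
public:
    void addRef() noexcept { ++useCount; }

    void release() noexcept
    {
        // freed only when the cache AND the last statement have let go
        if (--useCount == 0)
            delete this;
    }

protected:
    virtual ~MetaObjectBase() = default;

private:
    std::atomic<int> useCount{0};
};

// Holder kept in a statement's resource list (and in the cache itself):
// takes a reference when set, drops it when destroyed.
class MetaRef
{
public:
    explicit MetaRef(MetaObjectBase* p = nullptr) noexcept : ptr(p)
    {
        if (ptr)
            ptr->addRef();
    }

    MetaRef(const MetaRef& other) noexcept : MetaRef(other.ptr) {}

    MetaRef& operator=(MetaRef other) noexcept
    {
        std::swap(ptr, other.ptr);
        return *this;
    }

    ~MetaRef()
    {
        if (ptr)
            ptr->release();
    }

    MetaObjectBase* operator->() const noexcept { return ptr; }

private:
    MetaObjectBase* ptr;
};

The exact interface does not matter much; the point is only that the cache and
every prepared statement each hold a counted reference.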

> I suggest to leave implementation details for the future.

Agree ;)

>> > When a transaction creates/alters/drops an object, it creates a new
>> > version and attaches it to its local cache. All statements prepared
>> > within this transaction that used the altered/dropped object are marked
>> > as obsolete.
>>
>> Why? In this case, I think the DDL command should fail if there are
>> prepared statements in the same transaction using the object.

Personally, I prefer to execute the DDL and get an error when the old prepared
statement is executed. That seems more useful to me, but I may be wrong; let's
discuss it properly. We could also look at how other DBMSs handle this...

> Why should it fail? What's wrong with disabling access to prepared statements
> in the current transaction for the future?

I think the same way.
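
To make the behavior I prefer concrete (again a sketch with made-up names):
the DDL itself succeeds, the error appears only when somebody tries to run
the stale statement:

#include <stdexcept>
#include <string>
#include <vector>

struct CachedObject
{
    std::string name;
    bool obsolete = false;
};

struct Statement
{
    std::vector<CachedObject*> resources;   // objects the statement was compiled against
    bool obsolete = false;
};

// DDL inside the transaction: the new version goes to the local cache,
// statements compiled against the old version are only flagged, not destroyed.
void markDependentsObsolete(CachedObject& oldVersion, std::vector<Statement*>& txnStatements)
{
    oldVersion.obsolete = true;

    for (Statement* stmt : txnStatements)
        for (CachedObject* res : stmt->resources)
            if (res == &oldVersion)
                stmt->obsolete = true;
}

// Checked at the start of execution.
void checkBeforeExecute(const Statement& stmt)
{
    if (stmt.obsolete)
        throw std::runtime_error("metadata was changed, the statement must be prepared again");
}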

>> > When the transaction commits, it marks all changed object instances in
>> > the global cache as obsolete, removes them from the global cache and
>> > replaces them with the local cache objects.
>>
>> So metadata will always work in read-committed mode?
>
> Yes. This is not good

This is a "must be" for me. Any metadata change must be taken into account
immediately by every existing connection, except by currently running statements.

> but it looks like quite a possible compromise for FB3.
> Implementing complete RepeatableRead with DDL is a much more complicated task
> (provided we do not want to have conflicts very often when doing DDL).

I see no value in such a mode. Do you propose that a long-running transaction
should work with an already dropped table forever? :)
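
Anyway, the commit-time merge itself could look more or less like this
(locking and signaling of other processes are left out; names are invented):

#include <map>
#include <memory>
#include <string>

struct MetaVersion
{
    bool obsolete = false;
    // ... parsed object body ...
};

using MetaCache = std::map<std::string, std::shared_ptr<MetaVersion>>;

// Every connection that looks into the global cache afterwards immediately
// sees the new versions, i.e. read-committed metadata.
void mergeOnCommit(MetaCache& globalCache, const MetaCache& localCache)
{
    for (const auto& entry : localCache)
    {
        auto it = globalCache.find(entry.first);
        if (it != globalCache.end())
            it->second->obsolete = true;       // running statements keep their old reference

        globalCache[entry.first] = entry.second;   // picked up on the next lookup
    }
}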

>> > Before the start of execution, any statement checks all used objects
>> > and, if it finds an obsolete one, raises a corresponding error. Such a
>> > statement must be prepared again and will then use the new objects. This
>> > behavior can be made less restrictive if needed.
>>
>> This is not different from my proposal about FORCEd metadata changes.
>
> No - quite different. Requests already running when the DDL transaction is
> committed are allowed to continue execution with the old metadata (as they
> were compiled). This is the main difference for me.

Exactly. We may introduce a FORCED metadata change with Adriano's behavior:
all currently executing statements that use the altered objects will stop
execution, not continue.

>> > As long as the global metadata cache holds a reference to an object,
>> > it remains in memory regardless of how many statements use it.
>>
>> I don't understand exactly what you mean.
>
> I've understood this as 'the metadata cache holds references to all objects
> in it'. This means that an object in the metadata cache will not be released
> even if no statement is using it. When an object is removed from the metadata
> cache, it will be released as soon as the last statement releases its
> reference to it (or at once if there are no such statements).

Exactly, thanks for the explanation, Alex ;)

>> > When a new object is added into the global metadata cache, it is
>> > signaled (via a special AST) to all other processes, and they load the
>> > new object version from the database into their global metadata caches,
>> > mark the old instance as obsolete, etc.
>>
>> Is it to support always-read-committed metadata on classic?
>
> I think yes, at least to make CS/SS behave in the same way.

Yes
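
For Classic the notification part could be as simple as this (very rough
outline; the real thing would go through the lock manager, and these names
are invented):

#include <atomic>
#include <set>
#include <string>

struct ProcessMetaCache
{
    std::atomic<bool> reloadPending{false};
    std::set<std::string> obsoleteNames;    // versions to be re-read on next lookup
};

// Blocking AST delivered to every other process: keep it tiny,
// just raise a flag and get out.
void cacheBlockingAst(void* astArg)
{
    static_cast<ProcessMetaCache*>(astArg)->reloadPending.store(true);
}

// Later, at a safe point in normal code, the process marks its old instances
// obsolete; the caller reads the changed object names from the system tables,
// and the new versions are loaded on the next lookup.
void reloadIfSignaled(ProcessMetaCache& cache, const std::set<std::string>& changedObjects)
{
    if (!cache.reloadPending.exchange(false))
        return;

    for (const std::string& name : changedObjects)
        cache.obsoleteNames.insert(name);
}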

>> > To prevent simultaneous editing of the same object, we can introduce a
>> > corresponding lock and take it in EX mode before changing the object.
>>
>> Why wouldn't record locks on system tables work?
>
> I suppose it will work.

No problem then ;)

>> What is not clear to me:
>> 1) A statement is prepared in one transaction, the transaction is committed
>> and the statement is run in another transaction
>
> How is it related to DDL?
>
>> 2) Metadata integrity checks (see my reply to Alex) when old versions
>> are involved
>
> OK, I also do not see a good way to solve it without a per-process (not global)
> RW lock per cache (i.e. per database). Luckily, this lock will be taken
> for 'write' only for a very short period of time: when the transaction-level
> cache is to be merged into the global one.

Currently, when an FK constraint is changed, all processes rescan the
corresponding relation (load the info from the database) to pick up this
constraint. I see no reason why this will not work in the future.
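
And for the per-process lock Alex mentions, something simple should be enough:
lookups and integrity checks take it shared, the commit-time merge takes it
exclusive for a short moment. A sketch (std::shared_mutex used only for
illustration):

#include <shared_mutex>

class MetadataCacheGuard
{
public:
    // many concurrent readers: statement preparation, integrity checks, ...
    std::shared_lock<std::shared_mutex> readLock()
    {
        return std::shared_lock<std::shared_mutex>(mutex);
    }

    // single writer: merging a transaction-level cache into the global one
    std::unique_lock<std::shared_mutex> writeLock()
    {
        return std::unique_lock<std::shared_mutex>(mutex);
    }

private:
    std::shared_mutex mutex;
};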

Regards,
Vlad