Subject Re: [Firebird-Architect] Metadata cache
Author Adriano dos Santos Fernandes
Vlad Khorsun wrote:
> Below is a very raw idea of how to make our metadata cache versioned and allow users to safely alter
> metadata in an MT environment. It's not a complete proposal, just an idea.
>
> The key changes are:
>
> a) Introduce transaction-level and database-level metadata cache.
>
Transaction-level, but tied to savepoints, so that statements can run
correctly against uncommitted metadata.
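
Something like the sketch below, perhaps: one map per savepoint, so that
rolling back a savepoint also throws away the metadata versions created
under it. All names here are hypothetical; only the layering idea matters.

    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    struct MetaObject { std::string name; int version; };

    // One map per savepoint; the newest layer is searched first, so a
    // statement sees the versions created under the current savepoint
    // before anything older.
    class TransactionMetaCache
    {
        std::vector<std::map<std::string, std::shared_ptr<MetaObject>>> layers;
    public:
        TransactionMetaCache() { layers.emplace_back(); }   // base layer

        void startSavepoint() { layers.emplace_back(); }

        void rollbackSavepoint()        // undo DDL done under the savepoint
        {
            if (layers.size() > 1)
                layers.pop_back();
        }

        void releaseSavepoint()         // merge the layer into its parent
        {
            if (layers.size() > 1)
            {
                auto top = std::move(layers.back());
                layers.pop_back();
                for (auto& entry : top)
                    layers.back()[entry.first] = entry.second;
            }
        }

        void put(const std::shared_ptr<MetaObject>& obj)
        {
            layers.back()[obj->name] = obj;
        }

        std::shared_ptr<MetaObject> find(const std::string& name) const
        {
            for (auto it = layers.rbegin(); it != layers.rend(); ++it)
            {
                auto found = it->find(name);
                if (found != it->end())
                    return found->second;
            }
            return nullptr;             // fall through to the global cache
        }
    };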

> and
> b) Make every metadata object refcounted.
>
> Statements already have a list of used resources (metadata objects). Make them account for object
> references. The same for the metadata cache.
>
We need something like my proposed MetaRefHolder, stored in objects and
passed to lookup functions.
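
A minimal sketch of what I mean, assuming an intrusive refcount on every
metadata object (MetaRefHolder is the name from my proposal; the rest is
illustrative):

    #include <atomic>

    class MetaObjectBase
    {
        std::atomic<int> refCount{0};
    public:
        virtual ~MetaObjectBase() {}

        void addRef() { refCount.fetch_add(1, std::memory_order_relaxed); }

        void release()
        {
            // Last reference gone: the object (old version or not) dies here.
            if (refCount.fetch_sub(1, std::memory_order_acq_rel) == 1)
                delete this;
        }
    };

    // Stored in statements and caches, passed to lookup functions.
    class MetaRefHolder
    {
        MetaObjectBase* obj;
    public:
        explicit MetaRefHolder(MetaObjectBase* o = nullptr) : obj(o)
        {
            if (obj) obj->addRef();
        }
        MetaRefHolder(const MetaRefHolder& other) : MetaRefHolder(other.obj) {}
        MetaRefHolder& operator=(const MetaRefHolder& other)
        {
            if (other.obj) other.obj->addRef();     // safe for self-assignment
            if (obj) obj->release();
            obj = other.obj;
            return *this;
        }
        ~MetaRefHolder() { if (obj) obj->release(); }

        MetaObjectBase* get() const { return obj; }
    };

With this, the rule quoted further below (destroy the object when the
last statement releases it) falls out of release() automatically.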

> When a transaction prepares a statement, it looks for objects in the local cache first, then in the
> global cache. If an object is missing, it is loaded from the database into the global cache.
>
...
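
The lookup order described above could look roughly like this (names are
illustrative; loadFromSystemTables() stands in for a read of the RDB$
tables):

    #include <map>
    #include <memory>
    #include <string>

    struct MetaObject { std::string name; int version; };
    using Cache = std::map<std::string, std::shared_ptr<MetaObject>>;

    // Stub standing in for a read of the RDB$ system tables.
    std::shared_ptr<MetaObject> loadFromSystemTables(const std::string& name)
    {
        return std::make_shared<MetaObject>(MetaObject{name, 1});
    }

    std::shared_ptr<MetaObject> lookup(Cache& local, Cache& global,
                                       const std::string& name)
    {
        if (auto it = local.find(name); it != local.end())    // 1. transaction cache
            return it->second;
        if (auto it = global.find(name); it != global.end())  // 2. database cache
            return it->second;
        auto obj = loadFromSystemTables(name);                // 3. miss: hit the disk
        global[name] = obj;                                   // share the loaded version
        return obj;
    }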
> When a transaction creates/alters/drops an object, it creates a new version and attaches it to the
> local cache. All statements prepared within this transaction that used the altered/dropped object are
> marked as obsolete.
>
Why? In this case, I think the DDL command should fail if there are
prepared statements in the same transaction using the object.
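
A sketch of the check I have in mind, i.e. fail the DDL instead of
marking statements obsolete (everything named here is hypothetical):

    #include <stdexcept>
    #include <string>
    #include <vector>

    struct Statement
    {
        std::vector<std::string> usedObjects;   // names of metadata objects in use
    };

    // Called at the start of ALTER/DROP: refuse the DDL while any statement
    // prepared in this transaction still uses the object.
    void checkNoPreparedUsers(const std::vector<Statement*>& prepared,
                              const std::string& objName)
    {
        for (const Statement* stmt : prepared)
            for (const std::string& used : stmt->usedObjects)
                if (used == objName)
                    throw std::runtime_error("object " + objName +
                        " is in use by a statement prepared in this transaction");
    }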

> When the transaction commits, it marks all changed object instances in the global cache as obsolete,
> removes them from the global cache, and replaces them with the local cache objects.
So metadata will always work in read-committed mode?

> Note: old objects remain in memory, and any currently executing statement continues its execution.
>
> When the transaction rolls back, it just destroys its local metadata cache.
>
...
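
As I read it, commit and rollback would amount to something like this
sketch (types and names are mine, not the engine's):

    #include <map>
    #include <memory>
    #include <string>

    struct MetaObject
    {
        std::string name;
        int version = 1;
        bool obsolete = false;
    };
    using Cache = std::map<std::string, std::shared_ptr<MetaObject>>;

    // Commit: old global instances are only flagged, not freed; running
    // statements keep their shared_ptr, so the old version lives until
    // its last user goes away.
    void commitMetadata(Cache& local, Cache& global)
    {
        for (auto& entry : local)
        {
            auto it = global.find(entry.first);
            if (it != global.end())
                it->second->obsolete = true;    // old version, still in memory
            global[entry.first] = entry.second; // new version for everybody
        }
        local.clear();
    }

    // Rollback: uncommitted versions simply disappear.
    void rollbackMetadata(Cache& local)
    {
        local.clear();
    }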
> Before starting execution, any statement checks all used objects and, if one is found obsolete, raises
> the corresponding error. Such a statement must be prepared again and will use the new objects. This
> behavior can be made less restrictive if needed.
>
This is no different from my proposal about FORCEd metadata changes.
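
For reference, the pre-execution check could be as simple as this sketch
(assuming the obsolete flag from the commit sketch above):

    #include <memory>
    #include <stdexcept>
    #include <vector>

    struct MetaObject { bool obsolete = false; };

    // Run once before execution starts; the client must re-prepare on error.
    void checkResources(const std::vector<std::shared_ptr<MetaObject>>& used)
    {
        for (const auto& obj : used)
            if (obj->obsolete)
                throw std::runtime_error(
                    "metadata changed: statement must be prepared again");
    }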

> When a statement is destroyed, it releases its reference to each used metadata object. When there are
> no more references left, the metadata object is destroyed too.
Ok.

> As long as the global metadata cache holds a reference to an object, it remains in memory regardless
> of how many statements use it.
>
I don't understand exactly what you mean.

> When a new object is added to the global metadata cache, this is signaled (via a special AST) to all
> other processes, and they load the new object version from the database into their global metadata
> caches, mark the old instance as obsolete, etc.
Is this to support always-read-committed metadata on Classic?
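
If so, the per-process handler would presumably do something like this
sketch; the AST delivery itself is the lock manager's machinery, and
everything named here is hypothetical:

    #include <map>
    #include <memory>
    #include <string>

    struct MetaObject { std::string name; int version = 1; bool obsolete = false; };
    using Cache = std::map<std::string, std::shared_ptr<MetaObject>>;

    // Stub standing in for a read of the RDB$ system tables.
    std::shared_ptr<MetaObject> loadFromSystemTables(const std::string& name)
    {
        return std::make_shared<MetaObject>(MetaObject{name, 2, false});
    }

    // Executed by each process when it receives the change notification.
    void onMetadataChanged(Cache& global, const std::string& name)
    {
        auto it = global.find(name);
        if (it != global.end())
            it->second->obsolete = true;            // old copy stays alive
        global[name] = loadFromSystemTables(name);  // committed version
    }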

> To prevent simultaneous editing of the same object, we can introduce a
> corresponding lock and take it in EX mode before changing the object.
Why don't record locks on system tables work?
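
For comparison, the proposed protocol as I read it, sketched with
ordinary mutexes standing in for the lock manager's EX locks (all names
hypothetical):

    #include <map>
    #include <mutex>
    #include <string>

    std::mutex mapGuard;                        // protects ddlLocks itself
    std::map<std::string, std::mutex> ddlLocks; // one "EX lock" per object

    std::mutex& lockFor(const std::string& name)
    {
        std::lock_guard<std::mutex> g(mapGuard);
        return ddlLocks[name];  // std::map never relocates nodes, so the
                                // reference stays valid after we unlock
    }

    void alterObject(const std::string& name)
    {
        std::lock_guard<std::mutex> ex(lockFor(name));  // "take EX mode"
        // ... build the new version and attach it to the local cache ...
    }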

What is not clear to me:
1) A statement is prepared in one transaction, the transaction is committed,
and the statement is run in another transaction.
2) Metadata integrity checks (see my reply to Alex) when old versions
are involved.


Adriano