Subject | Re: [Firebird-Architect] Metadata cache
---|---
Author | Adriano dos Santos Fernandes |
Post date | 2008-08-07T11:07:25Z |
Vlad Khorsun wrote:
> Below is a very raw idea of how to make our metadata cache versioned and allow users to safely alter
> metadata in an MT environment. It's not a complete proposal, just an idea.
>
> The key changes are:
>
> a) Introduce a transaction-level and a database-level metadata cache.

Transaction-level, but related to savepoints, to allow correct use of
uncommitted metadata.
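
To make the two levels concrete, a minimal sketch, assuming hypothetical names (this is not Firebird code; std::shared_ptr stands in for the refcounting of point b):

```cpp
#include <map>
#include <memory>
#include <string>

// Hypothetical sketch: a metadata object and the two cache levels
// described above.
struct MetaObject {
    std::string name;
    int version = 0;
};

using MetaMap = std::map<std::string, std::shared_ptr<MetaObject>>;

// Database-level cache: committed versions, shared by all transactions.
struct DatabaseCache {
    MetaMap committed;
};

// Transaction-level cache: uncommitted versions created by this
// transaction's DDL. Tying it to savepoints would mean one such map per
// savepoint, discarded on rollback to that savepoint.
struct TransactionCache {
    DatabaseCache* db = nullptr;
    MetaMap local;
};
```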
> b) Make every metadata object refcounted.
>
> Statements already have a list of used resources (metadata objects). Make them account for the objects'
> references. The same for the metadata cache.

We need something like my proposed MetaRefHolder, stored in objects and
passed to lookup functions.
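
MetaRefHolder is only proposed in this thread, so the sketch below merely guesses at its shape: an RAII holder that accounts for exactly one reference on an intrusively refcounted object (all names hypothetical):

```cpp
#include <atomic>

// Hypothetical sketch of an intrusively refcounted metadata object.
struct MetaObject {
    std::atomic<int> refCount{1};   // the cache owns the first reference
    void addRef()  { refCount.fetch_add(1, std::memory_order_relaxed); }
    void release() {
        if (refCount.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete this;            // last reference gone: free the object
    }
};

// Possible shape of the proposed MetaRefHolder: stored in statements and
// passed to lookup functions, so every user accounts for one reference.
class MetaRefHolder {
    MetaObject* obj;
public:
    explicit MetaRefHolder(MetaObject* o = nullptr) : obj(o) {
        if (obj) obj->addRef();
    }
    ~MetaRefHolder() {
        if (obj) obj->release();    // statement destruction releases refs
    }
    MetaRefHolder(const MetaRefHolder&) = delete;
    MetaRefHolder& operator=(const MetaRefHolder&) = delete;
    MetaObject* operator->() const { return obj; }
};
```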
> When a transaction prepares a statement, it looks for objects in the local cache first, then in the global
> cache. If an object is missing, it is loaded from the database into the global cache.
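
Continuing the hypothetical sketch above, the described lookup order, local cache, then global cache, then load from the database, could look like:

```cpp
#include <map>
#include <memory>
#include <string>

struct MetaObject { std::string name; int version = 0; };
using MetaMap = std::map<std::string, std::shared_ptr<MetaObject>>;
struct DatabaseCache { MetaMap committed; };
struct TransactionCache { DatabaseCache* db = nullptr; MetaMap local; };

// Hypothetical stand-in for reading an object's definition from the
// RDB$ system tables.
std::shared_ptr<MetaObject> loadFromSystemTables(const std::string& name)
{
    auto obj = std::make_shared<MetaObject>();
    obj->name = name;
    return obj;
}

// Lookup order from the proposal: local cache, then global cache, then
// load from the database into the global cache.
std::shared_ptr<MetaObject> lookup(TransactionCache& tra, const std::string& name)
{
    auto local = tra.local.find(name);
    if (local != tra.local.end())
        return local->second;           // our own uncommitted version

    MetaMap& global = tra.db->committed;
    auto hit = global.find(name);
    if (hit != global.end())
        return hit->second;             // committed version already cached

    auto obj = loadFromSystemTables(name);
    global.emplace(name, obj);
    return obj;
}
```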
> When a transaction creates\alters\drops an object, it creates a new version and attaches it to the local
> cache. All statements prepared within this transaction which used the altered\dropped object are marked
> as obsolete.

Why? In this case, I think the DDL command should fail if there are
prepared statements in the same transaction using the object.
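
A sketch of the behavior Vlad describes (hypothetical types; the reply above argues the DDL should fail here instead of marking statements obsolete):

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

struct MetaObject { std::string name; int version = 0; };
using MetaMap = std::map<std::string, std::shared_ptr<MetaObject>>;

struct Statement {
    std::vector<std::shared_ptr<MetaObject>> resources;  // used objects
    bool obsolete = false;
};

struct TransactionCache {
    MetaMap local;                       // uncommitted versions
    std::vector<Statement*> statements;  // prepared in this transaction
};

// ALTER/DROP as described above: put the new version into the local
// cache and mark every statement of this transaction that used the
// old version of the object.
void alterObject(TransactionCache& tra, const std::shared_ptr<MetaObject>& oldObj)
{
    auto newObj = std::make_shared<MetaObject>(*oldObj);
    ++newObj->version;
    tra.local[newObj->name] = newObj;

    for (Statement* stmt : tra.statements)
        for (const auto& res : stmt->resources)
            if (res == oldObj)
                stmt->obsolete = true;   // must be re-prepared
}
```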
> When a transaction commits, it marks all changed object instances in the global cache as obsolete,
> removes them from the global cache and replaces them with the local cache objects. Note, old objects
> remain in memory and any currently executing statement continues its execution.

So metadata will always work in read-committed mode?
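
The commit step, sketched with the same hypothetical types; the global cache drops the old instance, but statements still holding a reference keep it alive:

```cpp
#include <map>
#include <memory>
#include <string>

struct MetaObject {
    std::string name;
    int version = 0;
    bool obsolete = false;
};
using MetaMap = std::map<std::string, std::shared_ptr<MetaObject>>;

// On commit: mark the old instance obsolete and replace it in the global
// cache with the local (now committed) version. Statements that still
// hold a reference keep the old object alive until they release it.
void commitMetadata(MetaMap& global, MetaMap& local)
{
    for (auto& entry : local) {
        auto old = global.find(entry.first);
        if (old != global.end())
            old->second->obsolete = true;   // running statements see this
        global[entry.first] = entry.second; // old instance leaves the cache
    }
    local.clear();
}
```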
> When a transaction rolls back, it just destroys its local metadata cache.
>
> Any statement, before starting execution, checks all used objects and, if it finds an obsolete one, raises
> a corresponding error. Such a statement must be prepared again and will use the new objects. This behavior
> can be made less restrictive if needed.

This is not different from my proposal about FORCEd metadata changes.
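
The pre-execution check then reduces to a scan of the statement's resource list (hypothetical types); the raised error forces a re-prepare against the new versions:

```cpp
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

struct MetaObject { std::string name; bool obsolete = false; };

struct Statement {
    std::vector<std::shared_ptr<MetaObject>> resources;
};

// Before execution: fail if any used object was replaced by committed
// DDL. The caller must prepare the statement again to pick up the new
// versions.
void checkResources(const Statement& stmt)
{
    for (const auto& res : stmt.resources)
        if (res->obsolete)
            throw std::runtime_error(
                "object " + res->name + " was altered; re-prepare the statement");
}
```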
> When a statement is destroyed, it releases its reference on each used metadata object. When there are
> no more references left, the metadata object is destroyed too.

Ok.

> As long as the global metadata cache holds a reference
> to an object, it remains in memory regardless of how many statements use it.

I don't understand exactly what you mean.

> When a new object is added into the global metadata cache, this is signaled (via a special AST) to all other
> processes, and they load the new object version from the database into their global metadata cache, mark
> the old instance as obsolete, etc.

Is it to support always-read-committed metadata on Classic?
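
A guess at what each process would do on receiving such a notification (hypothetical names; the AST delivery itself is Firebird's lock-manager machinery and is not shown):

```cpp
#include <map>
#include <memory>
#include <string>

struct MetaObject { std::string name; bool obsolete = false; };
using MetaMap = std::map<std::string, std::shared_ptr<MetaObject>>;

// Hypothetical stub; a real process would read the RDB$ system tables.
std::shared_ptr<MetaObject> loadFromSystemTables(const std::string& name)
{
    auto obj = std::make_shared<MetaObject>();
    obj->name = name;
    return obj;
}

// On notification that an object changed: invalidate our cached
// instance and load the committed version into our own global cache.
void onMetadataChanged(MetaMap& global, const std::string& name)
{
    auto it = global.find(name);
    if (it != global.end())
        it->second->obsolete = true;    // statements will re-prepare
    global[name] = loadFromSystemTables(name);
}
```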
> To prevent simultaneous editing of the same object, we can introduce
> a corresponding lock and take it in EX mode before changing the object.

Why don't record locks on system tables work?
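
A sketch of the intent of such a lock, assuming one exclusive lock per object name (a plain in-process mutex here; the real lock manager works across processes and can deliver ASTs):

```cpp
#include <map>
#include <mutex>
#include <string>

// Stand-in for the proposed EX-mode lock: one exclusive lock per object
// name, taken before the object is changed.
class MetaLockTable {
    std::mutex tableMutex;
    std::map<std::string, std::mutex> locks;
public:
    std::unique_lock<std::mutex> lockExclusive(const std::string& name)
    {
        std::mutex* m;
        {
            std::lock_guard<std::mutex> guard(tableMutex);
            m = &locks[name];   // created on first use; map nodes are stable
        }
        return std::unique_lock<std::mutex>(*m);
    }
};
```

A DDL operation would acquire the lock for the object's name before creating the new version, so two attachments cannot edit the same object at once.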
What is not clear to me:

1) A statement is prepared in one transaction, the transaction is committed, and
the statement is run in another transaction.

2) Metadata integrity checks (see my reply to Alex) when old versions
are involved.

Adriano