Subject | Re: [Firebird-Architect] Metadata cache |
---|---|
Author | Alex Peshkov |
Post date | 2008-08-08T07:45:51Z |
On Thursday 07 August 2008 15:07, Adriano dos Santos Fernandes wrote:
approach as "look in transaction cache, when missing - go to global one". Or
may be it's related to correct unwind in cache when savepoint is unwinded?
May be it will be enough to populate transaction-level cache at the end of
DDL request, when no savepoints are needed any more at all?
in current transaction for the future?
Implementing complete RepeatableRead with DDL is much more complicated task
(provided we do not want to have very often conflicts when doing DDL).
committed are allowed to continue execution with old metadata (like they were
compiled). This is main difference for me.
it'. This means that object in metadata cache will not be released even if no
statement is using it. When object is removed from metadata cache, it will be
released as soon as last statement releases reference to it (or at once if
there are no such statements).
RW lock per cache (i.e. per database). Luckily, this lock will be taken
for 'write' only for a very small period of time - when transaction-level
cache is to be merged into global one.
> Vlad Khorsun escreveu:Can you be a bit more specific here? I've always used to see suggested
> > a) Introduce transaction-level and database-level metadata cache.
>
> Transaction-level but related to savepoints, to allow correct run of
> uncommitted metadata.
approach as "look in transaction cache, when missing - go to global one". Or
may be it's related to correct unwind in cache when savepoint is unwinded?
May be it will be enough to populate transaction-level cache at the end of
DDL request, when no savepoints are needed any more at all?
> > andI suggest to leave implementation details for the future.
> > b) Make any metadata object refcounted.
> >
> > Statements already have list of used resources (metadata objects).
> > Make them account object's references. The same for metadata cache.
>
> We need something as my proposed MetaRefHolder, stored in objects and
> passed to lookup functions.
> > When transaction created\altered\dropped object, it creates newWhy should it fail? What's wrong with disabling access to prepared statements
> > version and attached it into local cache. All prepared within this
> > transaction statements which used altered\dropped object marked as
> > obsolete.
>
> Why? In this case, I think the DDL command should fail if there are
> prepared statements in the same transaction using the object.
in current transaction for the future?
> > When transaction commits it marked all changed objects instances inYes. This is not good, but looks like quite possible compromise for FB3.
> > global cache as obsolete, removed its from global cache and replaced its
> > by local cache objects.
>
> So metadata will always works in read-committed mode?
Implementing complete RepeatableRead with DDL is much more complicated task
(provided we do not want to have very often conflicts when doing DDL).
> > Any statement before start of execution checked all used objects and,No - quite different. Requests already running when DDL transaction is
> > if found one obsolete, raised corresponding error. Such statement must be
> > prepared again and will use new objects. This behavior can be made less
> > restrictive if needed.
>
> This is not different from my proposal about FORCEd metadata changes.
committed are allowed to continue execution with old metadata (like they were
compiled). This is main difference for me.
> > Until global metadata cache holds referenceI've understoood this as 'metadata cache holds references to all objects in
> > to object it remains in memory despite of how many statement used it.
>
> I don't understood exactly what you mean.
it'. This means that object in metadata cache will not be released even if no
statement is using it. When object is removed from metadata cache, it will be
released as soon as last statement releases reference to it (or at once if
there are no such statements).
> > When new object added into global metadata cache, its signaled (viaI think yes, at least to make CS/SS behave in the same way.
> > special AST) to all other processes and they loaded new object version
> > from database into its global metadata cache, marked old instance as
> > obsolete, etc.
>
> Is it to support always-read-committed metadata on classic?
> > To prevent simultaneous editing of the same object we can introduceI suppose it will work.
> > corresponding lock and take it in EX mode before change of object.
>
> Why record locks on system tables doesn't work?
> What is not clear for me:How is it related with DDL?
> 1) Prepare statement in one transaction, transaction is committed and
> statement is run on others transaction
> 2) Metadata integrity checks (see my reply to Alex) when old versionsOK, I also do not see a good way to solve it without per-process (not global)
> are involved
RW lock per cache (i.e. per database). Luckily, this lock will be taken
for 'write' only for a very small period of time - when transaction-level
cache is to be merged into global one.