Subject | Metadata cache
---|---
Author | Vlad Khorsun
Post date | 2008-08-06T21:33:26Z
Below is a very raw idea of how to make our metadata cache versioned and allow users to safely alter
metadata in an MT environment. It's not a complete proposal, just an idea.
The key changes are:
a) Introduce transaction-level and database-level metadata cache.
and
b) Make any metadata object refcounted.
Statements already have a list of used resources (metadata objects). Make them account for object
references. The same goes for the metadata cache.
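To illustrate (b), here is a rough C++ sketch of a refcounted metadata object and a statement that accounts a reference per used object. All names here are illustrative, not actual engine classes:

```cpp
#include <atomic>
#include <vector>

// Hypothetical refcounted metadata object (table, index, procedure, ...).
class MetaObject
{
public:
    void addRef() { ++m_refCount; }

    void release()
    {
        if (--m_refCount == 0)
            delete this;    // last user gone - object may be destroyed
    }

    bool obsolete() const { return m_obsolete.load(); }
    void markObsolete() { m_obsolete.store(true); }
    int refCount() const { return m_refCount.load(); }

protected:
    virtual ~MetaObject() = default;    // only release() may destroy it

private:
    std::atomic<int> m_refCount{0};
    std::atomic<bool> m_obsolete{false};
};

// A statement holds a reference for every metadata object it uses,
// and drops all of them when it is destroyed.
class Statement
{
public:
    void addResource(MetaObject* obj)
    {
        obj->addRef();
        m_resources.push_back(obj);
    }

    ~Statement()
    {
        for (MetaObject* obj : m_resources)
            obj->release();    // may destroy the object if nobody else uses it
    }

private:
    std::vector<MetaObject*> m_resources;
};
```

As long as the cache or any statement holds a reference, the object stays alive even after a newer version replaces it.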
When a transaction prepares a statement, it looks for objects in the local cache first, then in the global
cache. If an object is missing, it is loaded from the database into the global cache.
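The two-level lookup could look roughly like this (again just a sketch; `loadFromDatabase` is an assumed helper standing in for reading the system tables):

```cpp
#include <map>
#include <string>

struct MetaObject { /* refcounted metadata object, details omitted */ };

using MetaCache = std::map<std::string, MetaObject*>;

// Assumed helper: loads the current committed version of the object
// from the system tables. Stubbed out here.
MetaObject* loadFromDatabase(const std::string& name)
{
    return new MetaObject();
}

MetaObject* lookup(MetaCache& localCache, MetaCache& globalCache,
                   const std::string& name)
{
    // 1) transaction-level cache first: sees this transaction's own DDL
    if (auto it = localCache.find(name); it != localCache.end())
        return it->second;

    // 2) then the database-level cache shared by all transactions
    if (auto it = globalCache.find(name); it != globalCache.end())
        return it->second;

    // 3) cache miss: load from the database into the *global* cache
    MetaObject* obj = loadFromDatabase(name);
    globalCache[name] = obj;
    return obj;
}
```

Note that a transaction's own new version in the local cache shadows the committed version in the global cache.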
When a transaction creates/alters/drops an object, it creates a new version and attaches it to the local
cache. All statements prepared within this transaction which used the altered/dropped object are marked as
obsolete.
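A sketch of the DDL path inside a transaction (illustrative structures only, not engine code):

```cpp
#include <map>
#include <set>
#include <string>

struct MetaObject { bool obsolete = false; };

struct Statement
{
    std::set<std::string> usedObjects;   // names of metadata objects it uses
    bool obsolete = false;
};

struct Transaction
{
    std::map<std::string, MetaObject*> localCache;   // this txn's new versions
    std::set<Statement*> statements;                 // prepared in this txn

    void alterObject(const std::string& name)
    {
        // Create a new version and attach it to the *local* cache only;
        // other transactions keep seeing the old global version.
        localCache[name] = new MetaObject();

        // Every statement prepared in this transaction that used the
        // altered/dropped object must be re-prepared.
        for (Statement* stmt : statements)
            if (stmt->usedObjects.count(name))
                stmt->obsolete = true;
    }
};
```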
When a transaction commits, it marks all changed object instances in the global cache as obsolete,
removes them from the global cache and replaces them with the local cache objects. Note that the old
objects remain in memory and any currently executing statement continues its execution.
When a transaction rolls back, it just destroys its local metadata cache.
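The commit/rollback handling of the local cache could be sketched like this (refcounting, which keeps the obsolete old versions alive while running statements still reference them, is omitted for brevity):

```cpp
#include <map>
#include <string>

struct MetaObject { bool obsolete = false; };
using MetaCache = std::map<std::string, MetaObject*>;

// On commit: old global instances are marked obsolete and replaced by the
// transaction's new versions. The old objects stay in memory as long as
// statements hold references to them (not shown here).
void commit(MetaCache& localCache, MetaCache& globalCache)
{
    for (auto& [name, newVersion] : localCache)
    {
        if (auto it = globalCache.find(name); it != globalCache.end())
            it->second->obsolete = true;    // old version, may still be in use
        globalCache[name] = newVersion;     // new version becomes visible
    }
    localCache.clear();
}

// On rollback: the new versions are simply thrown away -
// no other transaction ever saw them.
void rollback(MetaCache& localCache)
{
    for (auto& [name, obj] : localCache)
        delete obj;
    localCache.clear();
}
```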
Before starting execution, any statement checks all used objects and, if it finds an obsolete one, raises a
corresponding error. Such a statement must be prepared again and will use the new objects. This behavior
can be made less restrictive if needed.
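The pre-execution check is trivial once objects carry an obsolete flag; a sketch (the real engine would raise a proper status-vector error, a C++ exception stands in for it here):

```cpp
#include <stdexcept>
#include <vector>

struct MetaObject { bool obsolete = false; };

struct Statement
{
    std::vector<MetaObject*> resources;   // metadata objects it was prepared with

    void execute()
    {
        // Before starting, verify every used object is still current.
        for (const MetaObject* obj : resources)
            if (obj->obsolete)
                throw std::runtime_error(
                    "object was altered or dropped; re-prepare the statement");
        // ... actual execution ...
    }
};
```

A less restrictive variant could, for example, allow already-started requests to finish or tolerate compatible changes; the check itself stays the same.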
When a statement is destroyed, it releases its reference on each used metadata object. When there are
no more references left, the metadata object is destroyed too. As long as the global metadata cache holds a
reference to an object, it remains in memory regardless of how many statements used it.
When a new object is added into the global metadata cache, this is signaled (via a special AST) to all other
processes, which load the new object version from the database into their global metadata caches, mark the
old instance as obsolete, etc. To prevent simultaneous editing of the same object we can introduce a
corresponding lock and take it in EX mode before changing the object.
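As a loose analogy only (the engine would of course use its lock manager and ASTs, not process-local mutexes), the idea is one exclusive lock per metadata object name, taken before any change:

```cpp
#include <map>
#include <mutex>
#include <string>

// One "EX mode" lock per metadata object name. std::mutex is a stand-in
// for a lock-manager lock; this illustrates only the locking discipline.
std::map<std::string, std::mutex> ddlLocks;
std::mutex mapGuard;

std::mutex& lockFor(const std::string& name)
{
    std::lock_guard<std::mutex> g(mapGuard);
    return ddlLocks[name];          // created on first use
}

void alterWithLock(const std::string& name)
{
    // Take the object's lock exclusively before changing it, so two
    // transactions cannot edit the same object simultaneously.
    std::lock_guard<std::mutex> ex(lockFor(name));
    // ... create new version, attach it to the local cache ...
}
```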
What do you think ?
Regards,
Vlad