Subject: Re: [Firebird-Architect] Metadata cache
Author: Alex Peshkov
Post date: 2008-08-08T08:42:15Z
On Friday 08 August 2008 12:13, Vlad Khorsun wrote:
> > On Thursday 07 August 2008 15:07, Adriano dos Santos Fernandes wrote:
> >> Vlad Khorsun escreveu:
> >> > a) Introduce transaction-level and database-level metadata cache.
> >>
> >> Transaction-level but related to savepoints, to allow correct run of
> >> uncommitted metadata.
> >
> > Can you be a bit more specific here? I've always seen the suggested
> > approach as "look in the transaction cache; when missing, go to the
> > global one". Or maybe it's related to correct unwinding of the cache
> > when a savepoint is unwound?
>
> I think, yes. Imagine:
>
> start transaction
> START SAVEPOINT S1;
> ALTER PROCEDURE P1...;
> ROLLBACK SAVEPOINT S1;
> ALTER PROCEDURE P2 // using P1. Here P2 must see old declaration of P1
>
> > Maybe it will be enough to populate the transaction-level cache at the
> > end of a DDL request, when no savepoints are needed any more at all?
>
> Hmm... maybe... I'm not sure it will handle correctly the example above
> with a user savepoint.

Yes, I've missed user savepoints. In the worst case we may also have DDL in
EXECUTE STATEMENT in a procedure.
> The obvious (maybe not the best and not the easiest) way is to account
> in the undo log for old instances of metadata objects. Need to think
> more about it.

Taking into account that the undo-log overflow problem seems to be solved
now, why not store them in it? The easiest way is to store in it some more
instances of the metadata cache, which will be merged into the transaction's
cache when the undo log is released.
> >> So metadata will always work in read-committed mode?
> >
> > Yes. This is not good
>
> This is a "must be" for me. Any metadata change must be taken into
> account immediately by every existing connection, except for currently
> running statements.

If a statement runs in a RepeatableRead transaction, I'm not sure you are
right. Why "must" metadata be changed in the middle of such a transaction?
> > but looks like quite a possible compromise for FB3.
> > Implementing complete RepeatableRead with DDL is much more complicated
> > task (provided we do not want to have very often conflicts when doing
> > DDL).
>
> I see no value in such a mode. Do you propose that a long-running
> transaction work with an already dropped table forever? :)

Yes, certainly. It can read data from the table after DELETE FROM without
conditions :) Why should DROP behaviour differ? Only because we can't
implement correct behaviour right now :))
> >> 2) Metadata integrity checks (see my reply to Alex) when old versions
> >> are involved
> >
> > OK, I also do not see a good way to solve it without per-process (not
> > global) RW lock per cache (i.e. per database). Luckily, this lock will be
> > taken for 'write' only for a very small period of time - when
> > transaction-level cache is to be merged into global one.
To be precise: when any modification of the cache (including loading data
from system tables) is done.
> Currently, when a FK constraint is changed, all processes rescan the
> corresponding relation (load info from the database) to pick up this
> constraint. I see no reason why this will not work in the future.

I just wanted to say that, to guarantee integrity, each data structure
accessible from more than one thread should be protected with a kind of
lock. An RwLock seems to be fine here.