Subject Re: [Firebird-Architect] Metadata cache
Author Alex Peshkov
On Friday 08 August 2008 12:13, Vlad Khorsun wrote:
> > On Thursday 07 August 2008 15:07, Adriano dos Santos Fernandes wrote:
> >> Vlad Khorsun wrote:
> >> > a) Introduce transaction-level and database-level metadata cache.
> >>
> >> Transaction-level, but tied to savepoints, to allow correct execution of
> >> uncommitted metadata.
> >
> > Can you be a bit more specific here? I've always seen the suggested
> > approach as "look in the transaction cache; when missing, go to the global
> > one". Or maybe it's related to correctly unwinding the cache when a
> > savepoint is unwound?
>
> I think, yes. Imagine:
>
> start transaction
> SAVEPOINT S1;
> ALTER PROCEDURE P1 ...;
> ROLLBACK TO SAVEPOINT S1;
> ALTER PROCEDURE P2 ...; -- uses P1; here P2 must see the old declaration of P1
>
> > Maybe it will be enough to populate the transaction-level cache at the end
> > of the DDL request, when no savepoints are needed any more at all?
>
> Hmm... maybe... I'm not sure it will correctly handle the example above with
> a user savepoint.

Yes, I missed user savepoints. In the worst case we may also have DDL in
EXECUTE STATEMENT inside a procedure.

> The obvious (maybe not the best and not the easiest) way is to record old
> instances of metadata objects in the undo log. Needs more thought.

Taking into account that the undo-log overflow problem seems to be solved now,
why not store them there? The easiest way is to keep additional instances of
the metadata cache in the undo log, to be merged into the transaction's cache
when the undo log is released.
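
To illustrate what I mean, here is a rough sketch (hypothetical types only, not
actual engine code - MetaObject, CacheLayer, TransactionCache and GlobalCache
are made-up names): each open savepoint owns its own layer of changed objects,
releasing a savepoint merges that layer down, rolling it back simply throws the
layer away, and lookup falls through savepoint -> transaction -> global cache.

#include <map>
#include <memory>
#include <string>
#include <vector>

struct MetaObject { std::string name, body; };   // stand-in for a procedure, relation, ...

using CacheLayer = std::map<std::string, std::shared_ptr<MetaObject>>;

struct GlobalCache { CacheLayer objects; };      // database-level, one per process

class TransactionCache
{
public:
    explicit TransactionCache(GlobalCache& g) : global(g) {}

    void startSavepoint() { layers.emplace_back(); }

    // RELEASE SAVEPOINT: merge the layer into the enclosing one
    void releaseSavepoint()
    {
        CacheLayer top = std::move(layers.back());
        layers.pop_back();
        CacheLayer& dest = layers.empty() ? txnLayer : layers.back();
        for (auto& item : top)
            dest[item.first] = item.second;
    }

    // ROLLBACK TO SAVEPOINT: just drop the layer holding uncommitted DDL
    void rollbackSavepoint() { layers.pop_back(); }

    void put(const std::string& name, std::shared_ptr<MetaObject> obj)
    {
        (layers.empty() ? txnLayer : layers.back())[name] = std::move(obj);
    }

    // Lookup falls through: innermost savepoint -> transaction -> global cache
    std::shared_ptr<MetaObject> lookup(const std::string& name) const
    {
        for (auto it = layers.rbegin(); it != layers.rend(); ++it)
        {
            auto found = it->find(name);
            if (found != it->end())
                return found->second;
        }
        auto found = txnLayer.find(name);
        if (found != txnLayer.end())
            return found->second;
        auto g = global.objects.find(name);
        return g != global.objects.end() ? g->second : nullptr;
    }

private:
    GlobalCache& global;
    CacheLayer txnLayer;               // visible to the whole transaction
    std::vector<CacheLayer> layers;    // one layer per open savepoint
};

With such layering the example above works: rolling back S1 discards the layer
holding the new P1, so ALTER PROCEDURE P2 resolves P1 through the transaction
and global caches and sees the old declaration.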

> >> So metadata will always work in read-committed mode?
> >
> > Yes. This is not good
>
> This is "must be" as for me. Any metadata change must be taken in
> account immediately by every existing connection, except of currently
> running statements.

If a statement runs in a RepeatableRead transaction, I'm not sure you are right.
Why "must" metadata be changed in the middle of such a transaction?

> > but it looks like quite a possible compromise for FB3.
> > Implementing complete RepeatableRead with DDL is a much more complicated
> > task (provided we do not want to have very frequent conflicts when doing
> > DDL).
>
> I see no value in such a mode. Do you propose that a long-running transaction
> keep working with an already dropped table forever? :)

Yes, certainly. It can keep reading data from the table after an unconditional
DELETE FROM :) Why should DROP behave differently? Only because we can't
implement the correct behaviour right now :))

> >> 2) Metadata integrity checks (see my reply to Alex) when old versions
> >> are involved
> >
> > OK, I also do not see a good way to solve it without a per-process (not
> > global) RW lock per cache (i.e. per database). Luckily, this lock will be
> > taken for 'write' only for a very small period of time - when the
> > transaction-level cache is merged into the global one.

To be precise, whenever any modification of the cache (including loading data
from system tables) is done.

> Currently, when an FK constraint is changed, all processes rescan the
> corresponding relations (loading the info from the database) to pick up this
> constraint. I see no reason why this will not work in the future.

I just wanted to say that, to guarantee integrity, each data structure
accessible from more than one thread should be protected by some kind of lock.
An RwLock seems to be fine here.
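
Something like this is what I have in mind for the lock (again just a sketch,
reusing the made-up types from the previous example - not how the real cache
classes look): readers take the lock shared, and it is taken exclusively only
for the short merge at commit time or when loading objects from system tables.

#include <shared_mutex>

class SharedMetadataCache
{
public:
    std::shared_ptr<MetaObject> lookup(const std::string& name) const
    {
        std::shared_lock<std::shared_mutex> guard(lock);   // many concurrent readers
        auto it = cache.objects.find(name);
        return it != cache.objects.end() ? it->second : nullptr;
    }

    // Called when a transaction commits its DDL: a short exclusive section
    // that publishes the transaction-level cache to every connection of this
    // process. Loading objects from system tables would take the same write lock.
    void merge(const CacheLayer& committed)
    {
        std::unique_lock<std::shared_mutex> guard(lock);
        for (const auto& item : committed)
            cache.objects[item.first] = item.second;
    }

private:
    mutable std::shared_mutex lock;
    GlobalCache cache;
};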