Subject | Re: [Firebird-Architect] Metadata cache
---|---
Author | Vlad Khorsun |
Post date | 2008-08-08T08:41:56Z |
>> What is not clear for me:
>> 1) Prepare a statement in one transaction, the transaction is committed and
>> the statement is run in another transaction
>> 2) Metadata integrity checks (see my reply to Alex) when old versions
>> are involved
>
> I know this will upset many users/developers but...
>
> In both of these cases, the prepared statements are invalid for new
> transaction context and any use of the prepared statements should raise
> an error.

I hope we can distinguish metadata changes which alter an object's structure from metadata
changes which touch only some rules (DRI/check constraints). In the latter case it is enough
to pick up the new rules as soon as the DDL transaction commits, without invalidating prepared
statements.
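
Roughly, in pseudo-C++ (the names below are illustrative only, not actual Firebird code):

```cpp
// Only changes to an object's structure invalidate prepared statements;
// rule-only changes (DRI / check constraints) are simply picked up at the
// next execution of an already prepared statement.
enum class MetadataChangeKind
{
    ObjectStructure,   // e.g. column added/dropped, datatype changed
    RulesOnly          // e.g. check constraint or DRI rule added/changed
};

inline bool invalidatesPreparedStatements(MetadataChangeKind kind)
{
    return kind == MetadataChangeKind::ObjectStructure;
}
```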

> (The error should not be when the new transaction is started but when
> the invalid statement is used in the new transaction)

Exactly.

> I think that it is entirely logical for a statement and transaction to
> be expected to be completely valid from a database schema and data
> perspective.

Yes.

> Therefore, if the database schema changes, then all prepared statements
> which refer to invalid objects should be marked as invalid, as of this
> transaction!

Yes.

> The key issue is that the detection logic should appropriately track the
> dependencies and the "real" changes. False collisions are not
> acceptable.

Alex and I discussed this yesterday (privately), and I think we can offer a lightweight
solution for this issue:
1. Introduce a database-global atomic counter of DDL transactions. It may be stored in the header
page or in shared memory; let's decide that later.
2. Every DDL transaction increments this counter at the very end of its commit.
3. Every prepared statement is marked with the current value of this DDL_TX counter at preparation time.
4. Before every execution, a statement compares its private DDL_TX mark with the current
value of the global DDL_TX. If the private mark is less than the global value, the statement must check all used
metadata objects for the obsolete flag. If such an object is found, the statement is marked as obsolete
and must be re-prepared (if possible), or it will raise an error on execution. If no such object is
found, the statement is marked with the new value of DDL_TX and starts its execution.

I think it is easy enough and will not add unnecessary overhead.
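
A minimal C++ sketch of steps 1-4 (the PreparedStatement / MetaObject shapes here are invented
for illustration and are not actual Firebird code):

```cpp
#include <atomic>
#include <cstdint>
#include <stdexcept>
#include <utility>
#include <vector>

// Database-global DDL_TX counter (steps 1-2); bumped at the very end of a
// DDL transaction's commit. Could equally live in the header page or in
// shared memory.
std::atomic<std::uint64_t> g_ddl_tx{0};

void onDdlCommitFinished()
{
    g_ddl_tx.fetch_add(1, std::memory_order_release);
}

struct MetaObject
{
    bool obsolete = false;   // set by the DDL that changed this object's structure
};

class PreparedStatement
{
public:
    explicit PreparedStatement(std::vector<const MetaObject*> used)
        : m_used(std::move(used)),
          m_ddlMark(g_ddl_tx.load(std::memory_order_acquire))   // step 3: stamp at prepare time
    {}

    void execute()
    {
        // Step 4: fast path - no DDL committed since the last check.
        const std::uint64_t current = g_ddl_tx.load(std::memory_order_acquire);
        if (m_ddlMark < current)
        {
            for (const MetaObject* obj : m_used)
            {
                if (obj->obsolete)
                    throw std::runtime_error("statement is obsolete, re-prepare required");
            }
            m_ddlMark = current;   // all dependencies still valid, advance the private mark
        }
        // ... actually run the statement ...
    }

private:
    std::vector<const MetaObject*> m_used;   // metadata objects this statement depends on
    std::uint64_t m_ddlMark;                 // private DDL_TX mark (step 3)
};
```

On the fast path this is a single atomic load and comparison per execution, so the scheme adds
practically no overhead when no DDL has been committed.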

> As both Alex and Paul have stated, your suggested approach of restricting
> schema changes is much too limiting.
>
> Schema changes on a live database must be allowed, without regard to the
> active transactions. New transactions, as with data, should be
> constrained by the new schema.

I would say not "transactions, as with data, should be constrained" but "statements", i.e. a constraint
is independent of transactions and must be taken into account immediately by all statements, regardless
of the transaction in which a statement is running or was prepared.

My point is that constraints usually do not affect a statement's execution plan or impure area, so
there is usually no need to invalidate a statement when a constraint changes.
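
For illustration only, a hypothetical sketch of how rule-only changes could be picked up at
execution time without touching the compiled plan (RelationMetadata / CheckConstraintSet are
invented names, not Firebird code):

```cpp
#include <memory>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

// The current set of check-constraint expressions for a relation.
struct CheckConstraintSet
{
    std::vector<std::string> expressions;
};

class RelationMetadata
{
public:
    // Called when a DDL transaction that changed only the rules commits:
    // the new rule set is installed, existing compiled plans are untouched.
    void replaceRules(std::shared_ptr<const CheckConstraintSet> rules)
    {
        std::lock_guard<std::mutex> guard(m_lock);
        m_rules = std::move(rules);
    }

    // Statements read the rules at execution time instead of baking them
    // into the prepared plan.
    std::shared_ptr<const CheckConstraintSet> currentRules() const
    {
        std::lock_guard<std::mutex> guard(m_lock);
        return m_rules;
    }

private:
    mutable std::mutex m_lock;
    std::shared_ptr<const CheckConstraintSet> m_rules;
};
```

Because the rules are read at execution time rather than compiled into the plan, a committed
constraint change takes effect on the very next execution of any already prepared statement,
with no invalidation needed.
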
Regards,
Vlad