Subject | Invalidating Objects was: Statement Cache [was: User name SYSDBA] |
---|---|
Author | Leyne, Sean |
Post date | 2005-08-31T16:17:38Z |
Dmitry,
> "Ann W. Harrison" <aharrison@...> wrote:the
> >
> > What is being proposed is a change to the current behavior.
>
> IMO, this discussion has very little to do with the statement cache. I
> don't know why Sean has raised it here, but if we proceed, could we
> rename the subject line?

I brought it up, because Jim said there would never be a need to mark
cached statements as invalid -- which would not be true if this new
feature were to be implemented.
I think the new feature will have an impact on the statement cache, but
don't have a problem discussing it separately.
> > In the case of changing a field definition, existing - running -
> > instances of the object would continue to run unchanged, and most
> > of the recompilations of referencing objects would occur without
> > error. However, if an object were deleted or changed in an
> > incompatible way, the damage caused would not be discovered until
> > some time later when the referencing objects failed to compile.
>
> Correct. And I agree such a hidden error could make some people
> unhappy.
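To make that hidden-error scenario concrete, here is a minimal sketch
(the table and procedure names are invented for illustration):

  CREATE TABLE customers (id INTEGER NOT NULL, name VARCHAR(40));

  SET TERM ^ ;
  CREATE PROCEDURE get_name (cust_id INTEGER)
    RETURNS (cust_name VARCHAR(40))
  AS
  BEGIN
    SELECT name FROM customers WHERE id = :cust_id INTO :cust_name;
  END^
  SET TERM ; ^

  /* An incompatible change: the column GET_NAME selects is dropped.
     Under the proposed behavior, running instances of GET_NAME would
     keep working, but the breakage would surface only later, when the
     procedure is next recompiled or validated. */
  ALTER TABLE customers DROP name;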
I think we need to look at the tradeoffs.
In one case, you can apply any schema change while the database is
running and all the objects will be re-validated as necessary. This will
make almost all developers very happy.
In the other case, if a developer does something 'stupid'/untested, the
impact of the schema change could be 'unfortunate'. This will make some
people unhappy.
Personally, I believe that a system should provide the greatest
flexibility. If that means that some people get hurt, well it's their
own fault.
> One real world example: after a programming error made within one of
> the kernel procedures, about 80% of the schema objects became invalid.
> It took 4 hours to recompile them (automated iterations based on a
> dependency graph) on a dual PIII machine. In this particular case, I'd
> prefer to treat an error as a show-stopper :-)

One suggestion for this would be to provide a runtime 'mode' (SET
ValidateSchema ON) which could wrap the schema changes and force the
system to revalidate all affected objects during the change. Thus the
impact of any change could be determined and rejected if the validation
fails.
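As a purely hypothetical sketch of how that mode might look in a script
(the syntax is invented here to illustrate the proposal, not an existing
Firebird statement):

  SET ValidateSchema ON;

  /* Hypothetical: with the mode enabled, the engine would revalidate
     every object affected by this change as part of the statement, and
     reject the ALTER if any referencing object fails to recompile. */
  ALTER TABLE customers DROP name;

  SET ValidateSchema OFF;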
Another possible feature would be a new command ValidateSchema, which
would attempt to validate all invalidated objects and report those which
failed to be validated.
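Again as an invented illustration, usage of the second idea could be as
simple as:

  ValidateSchema;

  /* Hypothetical output: a report listing each object that is still
     invalid, together with the compilation error that prevented it
     from being revalidated. */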
Sean