Subject:   Re: [Firebird-Architect] Metadata locking policy and caching
Author:    Alex Peshkov
Post date: 2008-08-06T07:29Z

On Tuesday 05 August 2008 17:32, Adriano dos Santos Fernandes wrote:
> >> To not make the (few) users doing DDL changes with GPRE very unhappy, we
> >> may introduce EXECUTE STATEMENT in GPRE. What do you think?
> >
> > Good solution, and it should work. But what to do with gdef? It's still
> > used by TCS (very actively), and I'm afraid it may be used by some people.
>
> I think we should remove it and rewrite our tests. We're already not
> supporting old OSs, so why do we need to continue supporting this dinosaur
> if it gets in our way? Users will still be able to use the old utilities
> without upgrading. :-)

I suppose we should ask about it in admin. Or we will have to continue
supporting DYN. As for me, the only problem is rewriting about 20 tests in
TCS. Really not critical.
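
To illustrate what the GPRE output could boil down to - a sketch only, the
table name is made up and the exact form is open to discussion:

    SET TERM ^ ;
    EXECUTE BLOCK AS
    BEGIN
      -- run the DDL through DSQL instead of a gdef call;
      -- LOG_ENTRIES is an invented name, for illustration only
      EXECUTE STATEMENT 'CREATE TABLE LOG_ENTRIES (ID INTEGER NOT NULL, MSG VARCHAR(80))';
    END^
    SET TERM ; ^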
> > Not agreed. Full DDL transaction support and the ability to modify metadata
> > on the fly are both desired.
>
> There is another complicating factor here. Statements prepared in one
> transaction may run in another one. How could metadata versions work in
> this situation?

We may drop the prepared-statements cache on commit of a DDL transaction. This
is no more than a minor performance issue - it's just a cache. BTW, the same
could be applicable to the metadata cache, but I'm afraid we already have some
direct references to it...
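
A sketch of the visible effect (T1 is just an example name):

    -- Connection A, transaction 1: the statement gets prepared and cached.
    SELECT * FROM T1;

    -- Connection B, transaction 2: a DDL change is committed; at this
    -- point the whole prepared-statements cache is dropped.
    ALTER TABLE T1 ADD NEW_COL INTEGER;
    COMMIT;

    -- Connection A: the next execution silently re-prepares against the
    -- new metadata - the only cost is one extra prepare.
    SELECT * FROM T1;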
> > The engine should better track versions itself instead
> > of suggesting that people do it manually using selects from MON$ system
> > tables.
> >
> >> and not work in some conditions.
> >>
> >> When you have table changes, versions don't apply. Why would one want to
> >> update an already dropped column?
> >
> > If a transaction was started before the column was dropped, it must see it.
>
> Ok. For tables, lock rules may be less restrictive.

Adriano, you are starting to invent different rules for different objects. Not
good for me.
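
To be concrete about the table case (again a sketch, names are made up):

    -- Transaction 1 starts while COL_A still exists.
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    SELECT COL_A FROM T1;

    -- Transaction 2 drops the column and commits.
    ALTER TABLE T1 DROP COL_A;
    COMMIT;

    -- Transaction 1 was started before the drop, so it must still
    -- see COL_A until it finishes.
    SELECT COL_A FROM T1;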
> >> When one finds a bug in a procedure, he fixes it, and the fix may need
> >> data changes (caused by the previously buggy code). Letting others keep
> >> running obsolete code may be a very bad thing.
> >
> > Have you ever seen a bug report where use of an old copy of a procedure (in
> > CS, for example) caused problems when a new one existed? I have not.
>
> I introduced the DSQL cache lock to fix a problem. :-)
>
> We generally see users complaining about old procedure and trigger
> versions running even in SS. We point out to them that this is documented in
> the IB6 docs.

I have not seen this, but maybe someone really does complain.

I suppose the worst problem here is that even a ReadCommitted transaction (not
talking about CS, where the connection must be dropped to get the new procedure
version - definitely bad behaviour, a bug to be clear) gets the old version of
the procedure.
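
Step by step, the behaviour I mean (SP1 is just an example name; in isql the
ALTER would need SET TERM around it):

    -- Connection A: a ReadCommitted transaction calls the procedure.
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    EXECUTE PROCEDURE SP1;

    -- Connection B: the procedure is fixed and the fix is committed.
    ALTER PROCEDURE SP1 AS BEGIN /* corrected body */ END;
    COMMIT;

    -- Connection A, same transaction: still runs the OLD version,
    -- although ReadCommitted is supposed to see committed changes.
    EXECUTE PROCEDURE SP1;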
> > Generally, we have
[fixed a digit to make future discussion a little clearer :))]
> > the following options:
> > 1. Let old transactions continue using old versions.
> > 2. Do not allow modification of any object used in any other running
> > request (transaction, prepared statement, etc.).
> > 3. Kill and unwind any request using an object we want to modify (BTW,
> > ideal when related data changes are needed).
> > Certainly, both 2 and 3 are easier to implement, and they provide
> > somewhat stricter logic. But I'm sure users will not like any of it.
>
> Option 3 may be implemented combined with 2, using FORCEd DDL changes. For
> example, "DROP PROCEDURE sp1 FORCE" (or some other syntax) would kill
> everyone using it. FORCE may also be a transaction parameter.

Yes, it can. With FORCE being a transaction parameter (certainly, one that only
the DBO can use) it even seems rather logical. But it's also not always OK.
Imagine one wants to modify an object in some real-time accounting database.
Killing any transaction in this case is not good - imagine phone-call
accounting, for example.
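
If we went this way, it could look roughly like this. Note that the FORCE
spelling is only the proposal from this thread, and the transaction-parameter
form is my guess at a possible spelling - nothing like it exists today:

    -- Proposed statement-level form: kill and unwind every request using SP1.
    DROP PROCEDURE SP1 FORCE;

    -- Hypothetical transaction-parameter form (DBO only):
    SET TRANSACTION FORCE_DDL;   -- invented spelling, for illustration
    DROP PROCEDURE SP1;
    COMMIT;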

Please take into account that this is a VERY SERIOUS change in Firebird
behaviour - since InterBase days, metadata modification has been possible in
parallel with the use of those objects. I'm sure that it will create a wave of
problems for our users no smaller than the ones related to changes like the
SQL-standard "NOT IN" or the inability to use BLOBs in "SELECT DISTINCT". I
will be very happy to see a metadata cache solution that is MT-safe, clear,
simple to program, etc. But I do not like it when it breaks logic and features
which have existed for years.
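
For comparison, the "NOT IN" change bit users exactly because the SQL-standard
NULL handling is surprising (T1 and T2 are made-up names):

    -- Under SQL-standard semantics this returns NO rows whenever T2.ID
    -- contains a NULL - which is what surprised people.
    SELECT * FROM T1 WHERE T1.ID NOT IN (SELECT ID FROM T2);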