Subject | Re: [Firebird-Architect] Metadata cache |
---|---
Author | Alexandre Benson Smith |
Post date | 2008-08-10T03:44:13Z |
Hi !
Please, at first excuse my ignorance...
I think that in an ideal world any kind of metadata change should be
allowed, no matter whether the object is in use or not. And I think this
is the desired goal (and what I would like it to be).
When I read "versioned metadata" I think of something exactly like the
transaction isolation one gets from "normal" records.
Let's suppose:
T1: create procedure P1
T1: commit
T2: starts
T2: execute procedure P1
T3: starts
T3: alter procedure P1
T3: commit
T2: execute procedure P1 (version created by T1)
T2: commit
T4: starts
T4: execute procedure P1 (version created by T3)
This would be the normal behaviour if the BLR were read from the system
tables every time (in a non read-committed transaction), so the problem
lies in how to handle it in the metadata cache. Is it too simplistic to
think that in the metadata cache every object holds a transaction
visibility flag (number), and that more than one copy of the same object
could stay in memory?
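To make the idea concrete, here is a minimal sketch (my own invention, not Firebird's actual implementation) of a cache that keeps several versions of an object, each stamped with the transaction number that committed it; a reader sees the newest version committed before it started, which reproduces the T1..T4 example above:

```python
# Hypothetical sketch: a metadata cache holding more than one copy of the
# same object, each tagged with the committing transaction's number.
from dataclasses import dataclass, field

@dataclass
class VersionedCache:
    # name -> list of (commit_txn_number, object_body), oldest first
    versions: dict = field(default_factory=dict)

    def commit(self, name, txn_number, body):
        """Register a new version of `name`, committed by `txn_number`."""
        self.versions.setdefault(name, []).append((txn_number, body))

    def lookup(self, name, reader_start_txn):
        """Return the newest version committed before the reader started."""
        visible = [body for committed, body in self.versions.get(name, [])
                   if committed < reader_start_txn]
        if not visible:
            raise KeyError(f"{name} not visible to txn {reader_start_txn}")
        return visible[-1]

# Replaying the scenario: T1 commits P1, then T3 alters it and commits.
cache = VersionedCache()
cache.commit("P1", txn_number=1, body="BLR as created by T1")
cache.commit("P1", txn_number=3, body="BLR as altered by T3")

print(cache.lookup("P1", reader_start_txn=2))  # T2 sees T1's version
print(cache.lookup("P1", reader_start_txn=4))  # T4 sees T3's version
```

The memory cost is bounded: an old version can be dropped as soon as no live transaction started before the version that superseded it.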
At first... what's the desired behaviour? That in a single transaction
every call to a SP executes the very same code? Or that the execution
code is refreshed as soon as possible?
Let's make a parallel with table changes...
If I add or drop a field from a table, an already started transaction
won't receive the new field or automagically miss the dropped one, so why
should it be different for SPs?
Regarding prepared statements, I think that when the transaction commits
and the new version becomes visible to an already prepared statement, it
should be re-prepared. If there is no error (an SQL-level error, like
selecting a field that doesn't exist anymore), the statement would
execute flawlessly; if, for example, a field in the WHERE clause is
missing, an exception would be raised and the application should handle
it as if it had come from the original prepare.
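A toy sketch of that re-prepare idea (names and structure invented purely for illustration): the statement is prepared again against the new definition, and an error surfaces to the application exactly as it would have at the original prepare.

```python
# Hypothetical sketch of re-preparing a statement after a metadata change.
class FieldMissing(Exception):
    pass

def prepare(referenced_fields, table_fields):
    """Toy 'prepare': fails if the statement references a dropped field."""
    missing = set(referenced_fields) - set(table_fields)
    if missing:
        raise FieldMissing(f"unknown field(s): {sorted(missing)}")
    return {"fields": list(referenced_fields)}  # stand-in for a real plan

# Statement prepared while the table still had fields A, B and C.
stmt = prepare(["A", "C"], {"A", "B", "C"})

# A DDL change drops C and commits; on the next execute the statement is
# transparently re-prepared against the new definition and the error is
# raised to the application.
try:
    stmt = prepare(["A", "C"], {"A", "B"})
except FieldMissing as e:
    print("raised to the application:", e)
```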
Regarding views, for example: I think the application should see the view
as it was committed when the transaction started; if it's changed, the
change becomes visible only when a new transaction starts.
If an application has long-running transactions, it's the developer's
problem for designing it poorly, and it would have the old (buggy) SP
code running for the whole day...
What if FB sent EVENTS alerting the application about metadata changes?
Is this too simplistic a way to see the problem?
Sorry for bothering you with my view; it could be completely wrong. Just
take my thoughts as how I would like to see it working...
see you !
--
Alexandre Benson Smith
Development
THOR Software e Comercial Ltda
Santo Andre - Sao Paulo - Brazil
www.thorsoftware.com.br