Subject Re: [firebird-php] transactions .... I thought I got it, little did I know.
Author Helen Borrie
At 12:01 AM 27/10/2007, you wrote:
>hi Helen,
>
>Helen Borrie wrote:
>>
>>
>> At 11:25 PM 26/10/2007, you wrote:
>>
>>>still I would like to know if it should be possible (is some way) to
>> perform
>>>SELECT queries AND INSERT queries within a single transaction and have
>> the SELECT
>>>queries be able to see the changes made by prior INSERT queries that
>> have not yet
>>>been committed, but were performed within the context of the same
>> [active] transaction.
>>
>> Yes. As long as the select and the dml statements are executed within
>> the same uncommitted transaction, refreshing your select set should show
>> the entire view that your transaction has of the table.
>
>hmm. okay - which begs the question why I can't see uncommitted INSERTs
>with subsequent SELECTs (all within a single transaction).
>
>I've tried every feasible combination of transaction arguments.
>I've also tried various combinations of using a concurrent read-only transaction
>for select statements and using the default transaction for select statements
>(I tried various strategies for committing the 'read only' transactions after
>every statement) ... no joy

That makes no sense. If you have a read-only transaction for selects and a read-write transaction for DML, neither transaction can see what the other sees. That's why it is called "transaction isolation". If you want your select to see uncommitted changes then the select MUST be in the same transaction as the DML executions.
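To make that concrete, here is a minimal sketch using PHP's ibase extension (table and column names are made up for illustration; the point is that the *same* transaction handle is passed to both the INSERT and the SELECT):

```php
<?php
// Hypothetical database path, credentials and table - adjust to taste.
$conn = ibase_connect('localhost:/data/test.fdb', 'SYSDBA', 'masterkey');

// Start ONE explicit transaction and hold on to its handle.
$trx = ibase_trans(IBASE_DEFAULT, $conn);

// DML and the subsequent SELECT both run against $trx, not $conn.
ibase_query($trx, "INSERT INTO customers (id, name) VALUES (1, 'Ann')");

// Same transaction, so this SELECT sees the uncommitted row.
$result = ibase_query($trx, "SELECT name FROM customers WHERE id = 1");
$row = ibase_fetch_object($result);
echo $row->NAME;

// The classic mistake is calling ibase_query($conn, ...) here instead:
// that runs in a different (default) transaction, which is isolated
// from $trx and cannot see the uncommitted INSERT.

ibase_rollback($trx);
ibase_close($conn);
?>
```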

>>>personally I feel like it should be possible (maybe given certain
>> transaction arguments when the transaction is started) to see
>> uncommitted changes within the confines of the transaction that made them.
>>
>> Exactly so. No special transaction arguments are needed. But don't waste
>> resources on creating a new select statement unless you need to. If the
>> reselect is the same then your initial query is still prepared. Just do
>> whatever it takes to empty the buffers and call the query again using
>> the same statement handle.
>
>the simplest case I have has 2 unprepared queries within a single transaction:
>INSERT then SELECT.
>
>I'll be honest I don't give a hoot about wasted resources right now, I'll
>figure out optimization when it works the way I want it :-)

Then test out the simple case above using isql - one instance, no commits.
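Something like this (table name invented), all in one isql session with no COMMIT in between:

```
SQL> INSERT INTO t (id, txt) VALUES (1, 'pending');
SQL> SELECT * FROM t WHERE id = 1;
-- the row appears: the transaction sees its own uncommitted INSERT
SQL> ROLLBACK;
SQL> SELECT * FROM t WHERE id = 1;
-- now a new transaction has started and the row is gone
```

If that works in isql (it should), the problem is in how your PHP layer is handing out transaction handles, not in the server.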

>>
>> Understand, though, that rolling back the transaction will roll back
>> EVERYTHING you have done in the transaction, including any DML that you
>> determined to be OK. This looks to me like a case where an executable
>> procedure call with exception handling and a return value would suit
>> your requirements better than a messy mix of executes and selects that
>> you might not have a lot of control over...
>
>true that this might be a better solution - and it would be doable if the
>problem in question was a particular issue, unfortunately my issue is rather
>more generic ... it's part of a complex ORM setup which is actually in use by
>a number of big projects ... as such a solution must neither change the DB schema nor require changes to data object definitions.
>
>so the only place I can make the fix (however inefficient, gnarly or just plain ugly) is in my DB class and the related ORM base class.

It *looks* as though your DB class is being too rigorous about committing - at least that is where I would want to look if it was my code. If it is imperative that this class should never commit DML changes then a "DB class" is probably too broad. You want a connection to persist until you are ready to detach it. You want a transaction to last the shortest possible time and use the least possible resources.

You have two distinct sets of behavioural requirements here. Treat your group of interdependent statements as a task. You need a connection class, plus a transaction *task* class whose instances are created and destroyed for *each* task. Its destructor must roll back the transaction that it wraps and clean up its resources.
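A rough sketch of that task class, again assuming the ibase extension (the class and method names are illustrative, not a prescription):

```php
<?php
// One TransactionTask per group of interdependent statements.
// Illustrative sketch only.
class TransactionTask
{
    private $trx;
    private $committed = false;

    public function __construct($connection)
    {
        // One transaction per task, created on demand.
        $this->trx = ibase_trans(IBASE_DEFAULT, $connection);
    }

    public function query($sql)
    {
        // Every statement in the task uses the SAME transaction,
        // so SELECTs see the task's own uncommitted DML.
        return ibase_query($this->trx, $sql);
    }

    public function commit()
    {
        ibase_commit($this->trx);
        $this->committed = true;
    }

    public function __destruct()
    {
        // Destroyed without an explicit commit? Roll everything back.
        if (!$this->committed) {
            ibase_rollback($this->trx);
        }
    }
}
?>
```

The connection object outlives many tasks; each task's transaction lives only as long as the task itself, which keeps transactions short as described above.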

Rolling back a transaction makes garbage in the database "stickier" than committing one. Performing multiple DML operations on a single record makes a lot of garbage and eventually will cause a "too many savepoints" exception. Your transaction class needs to keep some accounting of such things and know the appropriate action to take when things are likely to go "pop"....

Helen