Subject | Re: [firebird-php] Re: Proper transaction & deadlock handling |
---|---|
Author | Daniel Miller |
Post date | 2016-11-23T19:24:59Z |
I've adjusted my error routines slightly - maybe they weren't actually
logging failed ibase_query calls. I don't think so - but we'll see.
Changing the transaction flags doesn't seem to have resolved the issue
as I saw another deadlock warning this morning.
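For reference, a minimal sketch of the kind of error logging meant here
(the wrapper name logged_query is made up for illustration; it is not
something from my actual code):

```php
<?php
// Hypothetical wrapper: log every failed ibase_query call with the
// engine's error code and message before returning the result.
function logged_query($trans, string $sql)
{
    $result = ibase_query($trans, $sql);
    if ($result === false) {
        // ibase_errcode()/ibase_errmsg() report the last Firebird error,
        // e.g. a deadlock / lock conflict.
        error_log(sprintf('ibase_query failed [%d]: %s -- %s',
            ibase_errcode(), ibase_errmsg(), $sql));
    }
    return $result;
}
```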
I'm doing everything I can to minimize the possibility of deadlocks.
Every read is an explicit read within a transaction. Every write is kept
as small as possible and wrapped in an explicit transaction. Some writes
touch multiple tables, but still within a single explicit transaction. I
don't even keep the database handle open - because my "servers" don't
have constant events, and "peak" load seems to be 1 write per second, I
explicitly close my database handle and re-open the connection for every
read or write "batch" (if I need to query multiple tables based on a
server event or user query, it's done via a single connection which is
then closed). Sounds stupid and "expensive" but I'm trying, darn it!
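
For illustration, here is roughly what one such "batch" looks like - a
sketch only: the database path, credentials, table, and transaction
flags below are placeholders, not the actual setup from this thread.

```php
<?php
// One read-or-write "batch": connect, run an explicit transaction,
// commit or roll back, then close the connection again.
// DSN, credentials, and the UPDATE statement are placeholders.
$db = ibase_connect('localhost:/data/app.fdb', 'SYSDBA', 'masterkey');

// IBASE_REC_VERSION + IBASE_WAIT is one common flag choice to reduce
// spurious lock conflicts (an assumption, not necessarily the flags used here).
$trans = ibase_trans(IBASE_WRITE | IBASE_COMMITTED | IBASE_REC_VERSION | IBASE_WAIT, $db);

if (ibase_query($trans, "UPDATE servers SET last_seen = CURRENT_TIMESTAMP WHERE id = 1") === false) {
    error_log('ibase_query failed: ' . ibase_errmsg());
    ibase_rollback($trans);
} else {
    ibase_commit($trans);
}

ibase_close($db);   // the handle is not kept open between batches
```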
Daniel
On 11/23/2016 3:21 AM, Lester Caine lester@... [firebird-php] wrote:
> On 23/11/16 10:25, masotti masotti@... [firebird-php] wrote:
>>>> I've not tested this, but at a guess if 1000 records were to be
>>>> updated and the return value was 999 this might indicate that one
>>>> record had a deadlock and couldn't be updated, but 999 could. So
>>>> perhaps you could commit the 999 and retry the 1 failure, which might be
>>>> more efficient than rolling back all 1000 updates and retrying the whole
>>>> lot. (As I say, just a guess following the same "logic".)
>> Perhaps it's not possible to commit the transaction in this case: the
>> deadlocked record would refuse it and the commit would return FALSE.
>> Transaction atomicity must be respected.
> That was my thought here ... although flamerobin will give information
> on why the commit could not be completed? That same information should
> be available in PHP? I am sure there is room for improvement in PHP on
> the generic driver, but PDO has to work with the 'lowest common
> denominator' that works for all database engines.
>
>> AFAIK rollback is the only chance, and happens automagically at end of
>> script execution.
> Or you call a rollback and try again. My 'example' is always stock
> control where one has multiple clients taking stock for orders and each
> has a transaction wrapping the whole order. The 'deadlock' is when two
> clients are trying to grab the same stock so first one wins and takes
> the stock while the second one then has to decide to drop an item, or
> the whole order. Some of my suppliers using crap database engines just
> take the order anyway and email you later that there was a problem ;)
>
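
To make the rollback-and-try-again approach discussed above concrete,
here is a minimal PHP sketch under stated assumptions: the helper name,
retry count, and transaction flags are illustrative, not something from
this thread. The whole batch is retried from the start on any failure,
which keeps transaction atomicity intact rather than committing a
partial set.

```php
<?php
// Hypothetical retry helper: on any failure (e.g. a deadlock) the whole
// transaction is rolled back and the batch is retried from scratch,
// never partially committed.
function run_with_retry($db, array $statements, int $maxAttempts = 3): bool
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        $trans = ibase_trans(IBASE_WRITE | IBASE_COMMITTED | IBASE_NOWAIT, $db);
        $ok = true;
        foreach ($statements as $sql) {
            if (ibase_query($trans, $sql) === false) {
                $ok = false;        // e.g. a lock conflict on one record
                break;
            }
        }
        if ($ok && ibase_commit($trans)) {
            return true;            // all statements committed atomically
        }
        error_log("attempt $attempt failed: " . ibase_errmsg());
        ibase_rollback($trans);
    }
    return false;   // caller decides: drop the item, or the whole order
}
```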