Subject: Re: [firebird-php] Re: Proper transaction & deadlock handling
Hi Daniel,

After mulling over this topic, I remember experiencing the same confusion when I first encountered this issue, and wondering why I didn't get an error message and/or code when there is a deadlock. Testing the error code itself is not enough. (So I take back what I said at first!)

In my case I was executing a query to update a single record, with the knowledge that another process might also be trying to update the same record at the same time.

As I see it, these are the possible outcomes of a call to ibase_query for such an update:

(a). false is returned.
This likely indicates a fatal error, as shown by ibase_errmsg/ibase_errcode.
If you roll back and try again you will most likely get the same error.
You'd probably want to roll back the transaction and flag this up in your system as a serious error to be corrected.

(b). 1 is returned.
This indicates the update was successful.
Commit the transaction, everything is ok.

(c). 0 is returned.
The update failed, probably due to a deadlock.
Roll back the transaction, then try again; you might be successful next time.
(I can't remember for sure what the errcode is for this one, but as you indicated there may not be any error code or message.)

Hence, as Lester has indicated, handling this (case (c)) in PHP is not ideal, and not obvious from the documentation. I'm not sure if there is an easy way to get the deadlock warning message that gets written to the log, or whether it is worth the effort to do so. (I'm also not sure what else would lead to a return value of 0 other than a deadlock, if you are expecting to update exactly one record.)
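The three cases above could be sketched roughly as follows. This is only illustrative, not tested: it assumes $db is an open ibase_connect() handle, that the UPDATE targets exactly one row, and the function name and $maxRetries are my own inventions.

```php
<?php
// Sketch only: retry loop for a single-row UPDATE over the ibase_* API.
// $db is assumed to be a valid ibase_connect() resource.
function update_with_retry($db, $sql, $maxRetries = 3)
{
    for ($attempt = 1; $attempt <= $maxRetries; $attempt++) {
        $tr = ibase_trans(IBASE_WRITE | IBASE_NOWAIT, $db);
        $result = ibase_query($tr, $sql);

        if ($result === false) {          // case (a): fatal error
            ibase_rollback($tr);
            trigger_error('Update failed: ' . ibase_errmsg(), E_USER_WARNING);
            return false;                 // flag up; retrying won't help
        }
        if ($result === 1) {              // case (b): one row updated
            ibase_commit($tr);
            return true;
        }
        ibase_rollback($tr);              // case (c): 0 rows, likely a deadlock
    }                                     // loop round and try again
    return false;                         // still failing after $maxRetries
}
```

IBASE_NOWAIT makes a lock conflict show up immediately instead of blocking; whether you want that or the default wait behaviour depends on your application.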

If your query can update more than one row, then you might need to first determine how many rows it is supposed to update, and then compare that with the returned value.
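Something like the following, perhaps. Again a rough, untested sketch: the table and column names are made up, and it assumes $tr is an open transaction and that both statements share the same WHERE clause.

```php
<?php
// Sketch: compare the expected row count with the affected-row count
// that ibase_query() returns for an UPDATE. Table/columns are hypothetical.
$res = ibase_query($tr, "SELECT COUNT(*) FROM orders WHERE status = 'pending'");
$row = ibase_fetch_row($res);
$expected = (int) $row[0];

$affected = ibase_query($tr, "UPDATE orders SET status = 'done' WHERE status = 'pending'");

if ($affected === $expected) {
    ibase_commit($tr);    // every targeted row was updated
} else {
    ibase_rollback($tr);  // some rows were locked; retry later
}
```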

I've not tested this, but at a guess, if 1000 records were to be updated and the return value was 999, this might indicate that one record hit a deadlock and couldn't be updated, but 999 could. So perhaps you could commit the 999 and retry the one failure, which might be more efficient than rolling back all 1000 updates and retrying the whole lot. (As I say, just a guess following the same "logic".)

As a side note, in my case the two processes were updating different fields in the same record, so the long-term solution was to split the data across two tables to avoid the deadlocks. The above approach worked in the interim.