Subject: Re: Lock update for checkin/checkout?
Author: h_urlaf
Post date: 2004-02-04T07:48:35Z
--- In firebird-support@yahoogroups.com, Helen Borrie <helebor@t...>
wrote:
> AFAICT from your description, this is exactly the kind of "strictly
> serial" scenario that WITH LOCK was designed for,

OK, that's good to hear. The release notes mention something to the
effect of "if you don't understand the implications, you don't want to
be messing with this", so... ;)
> as long as you isolate the SELECT statement from which selections
> are being made inside one transaction from the UPDATE statement
> that is being used to trap the row for your update operation in
> another transaction.

You mean that the other selects should explicitly exclude based on the
flag column I set?
> Both transactions should be READ COMMITTED,

I can do that per-transaction?

> the one with the update should have NO WAIT isolation,
> the first read-only (if possible) and both of them

The initial SELECT ... WITH LOCK would be read-only, right? How could
I perform a write in a SELECT?

> explicit (no Autocommit).

OK, got that.
> Use the first query to select the unique key of the row to be
> updated and the second to invoke a stored procedure that takes the
> unique key as its input argument and performs the update.

Ah, OK.
The thing is, I'd prefer to have the actual blob transform occur
outside the database. It uses external tools, so what I'd like to do
is:
1. Select a row that needs conversion and mark it hands-off to other
connections
2. Fetch the blob and store it in the filesystem
3. Convert
4. Update the row, storing the converted blob.
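FWIW, the four steps above can be sketched from the client side like this. This is only a sketch under stated assumptions: the table (`blobs`) and its columns (`data`, `claimed`, `converted`) are made up, and I'm using Python's stdlib `sqlite3` purely as a stand-in so the sketch is self-contained; against Firebird you'd use a Firebird driver, and the flag-column claim could be replaced by the SELECT ... WITH LOCK route discussed here.

```python
import sqlite3

def claim_next_row(con):
    """Step 1: pick an unconverted row and mark it hands-off to other
    connections.  The guarded UPDATE is atomic, so two converters can
    never claim the same row, even from different machines."""
    row = con.execute(
        "SELECT id FROM blobs WHERE claimed = 0 AND converted = 0 LIMIT 1"
    ).fetchone()
    if row is None:
        return None                                   # nothing left to convert
    cur = con.execute(
        "UPDATE blobs SET claimed = 1 WHERE id = ? AND claimed = 0", (row[0],)
    )
    con.commit()
    return row[0] if cur.rowcount == 1 else None      # lost the race; retry later

def convert_one(con, transform):
    """Steps 2-4: fetch the blob, convert it (here in-process; in real life
    you'd write it to the filesystem and run the external tool), and store
    the converted blob back, clearing the hands-off flag."""
    rid = claim_next_row(con)
    if rid is None:
        return False
    (blob,) = con.execute("SELECT data FROM blobs WHERE id = ?", (rid,)).fetchone()
    new_blob = transform(blob)                        # the costly external conversion
    con.execute(
        "UPDATE blobs SET data = ?, converted = 1, claimed = 0 WHERE id = ?",
        (new_blob, rid),
    )
    con.commit()
    return True

# Demo with an in-memory database and a trivial "conversion".
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE blobs (id INTEGER PRIMARY KEY, data BLOB,"
    " claimed INTEGER DEFAULT 0, converted INTEGER DEFAULT 0)"
)
con.executemany("INSERT INTO blobs (data) VALUES (?)", [(b"one",), (b"two",)])
con.commit()
while convert_one(con, bytes.upper):                  # convert until no rows remain
    pass
```

The only essential point of the sketch is that step 1 must be atomic; everything between claim and final update can then safely happen outside the database.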
> Inside the SP, trap the row into a cursor, do the conversion, and
> exit. Commit the transaction as soon as one invocation of the SP
> completes.

Wouldn't that be 'the' invocation of the SP? One converter will always
be working on one BLOB at a time.
> CREATE PROCEDURE CONVERT_BLOB (ThePKey BIGINT)
> AS
>   DECLARE VARIABLE T_ID BIGINT;
>   DECLARE VARIABLE TheBlob BLOB SUB_TYPE 0;
>   DECLARE VARIABLE <other cols you need for the operation>
>   ...,
>   DECLARE VARIABLE TheNewBlob BLOB SUB_TYPE 0;
> BEGIN
>   FOR SELECT <list of columns>
>     FROM TheTable
>     WHERE PK_of_Table = :ThePKey
>     FOR UPDATE OF TheBlobColumn WITH LOCK
>     INTO
>       :T_ID,
>       :TheBlob,
>       :<other column variables>
>     AS CURSOR Crsr
>   DO
>   BEGIN
>     TheNewBlob = MassageBlob(:TheBlob); /* Your conversion function */
>     UPDATE TheTable
>       SET TheBlobColumn = :TheNewBlob
>       WHERE CURRENT OF Crsr;
>     WHEN ANY DO /* or find out the gdscode for pess. lock */
>       EXCEPTION MyException;
>   END
>
> END

I see... but much of this logic would run outside Firebird, since the
conversion is costly (time-wise and cpu-wise) and I want to be able to
run these conversions on different machines if need be.
> To handle the failure, create a custom exception MyException with
> some friendly message like 'Someone else is already updating this
> record', or trap the blockage directly using the gdscode.

Ah! I see.
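On the converter side, trapping that failure just means skipping the row and moving on to the next candidate. A minimal Python sketch, where `LockConflictError` and `convert_row` are hypothetical stand-ins for whatever exception your driver actually raises and however you invoke the stored procedure (neither is a real Firebird API):

```python
class LockConflictError(Exception):
    """Stand-in for the error a driver would surface when the WITH LOCK /
    NO WAIT request finds the row already locked, or when the SP raises
    the custom MyException."""

def convert_candidates(candidate_ids, convert_row):
    """Try to convert each candidate row; rows another converter is
    already updating are skipped rather than waited on."""
    done, skipped = [], []
    for rid in candidate_ids:
        try:
            convert_row(rid)        # e.g. EXECUTE PROCEDURE CONVERT_BLOB + commit
        except LockConflictError:
            skipped.append(rid)     # someone else is already updating this record
        else:
            done.append(rid)
    return done, skipped

# Demo with a fake converter that finds row 2 locked by another converter.
def fake_convert(rid):
    if rid == 2:
        raise LockConflictError("record is locked")

done, skipped = convert_candidates([1, 2, 3], fake_convert)
```

Skipping instead of waiting is what NO WAIT buys you here: a busy row costs the converter nothing but one failed attempt.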
> I don't know the gdscode that would be returned for the blocked WITH
> LOCK request -- I'm assuming it's record_lock (335544476) but this
> question prompts me to find out and get it into the release notes!

Many thanks, Helen. Eagerly awaiting the release of your book,

Emiliano