Subject | Re: [IBO] INVALID BLOB ID - PutVarSlice
---|---
Author | Helen Borrie |
Post date | 2006-11-03T05:57:53Z |

Johannes,
At 03:35 PM 3/11/2006, you wrote:
>Good day all
>
>We have had problems in the past with invalid blob IDs. We thought
>that moving to an InterBase version higher than 5.6 would fix it,
>but this seems not to be the case.
>
>Then today I found some comments in
>
>procedure TIB_ColumnArray.PutVarSlice( const Values: variant );
>
>where it states, near the end:
>
> SysSetIsNull( false );
>// Need to finish the auto re-fetch of the new ARRAY ID after this
>// call is made.
>// Otherwise, an Invalid BLOB ID error is generated if a call to get_slice
>// is performed using the temporary Array ID that put_slice creates.
> SysAfterModify;
>end;
>
>Now what I want to know is whether there is a possibility that this
>is the reason for the Invalid Blob IDs that we get every now and again.

Are you talking about invalid blob IDs on array slices or on blobs?
Array slices are a particular problem on the client side, and errors
associated with regular blobs don't have anything to do with "slices".

However, it's in the nature of the way blobs are stored and updated
that your client code or stored procedure code can cause a mess.
When you post a record containing a new blob - which applies both to
an insert and to an update - the engine creates a temporary blob ID
which it associates with that data in the uncommitted record. All is
well if you do nothing more to that record before committing or
rolling back the work.

However, if you allow the user to update the record more than once
in the same transaction, or to update a record that s/he has
inserted but not committed, then you will mess up the blob IDs and
cause the Invalid Blob ID exception.
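
To make that pattern concrete, here is a minimal isql-style sketch
of the kind of double hit described above. The table and values are
invented for illustration; they are not from the IBO sources or from
your application:

  CREATE TABLE documents (
    id   INTEGER NOT NULL PRIMARY KEY,
    body BLOB SUB_TYPE TEXT
  );
  COMMIT;

  /* Both statements touch record id = 1 inside one transaction.
     The INSERT gives the new blob a temporary blob ID; the UPDATE
     then hands the engine a second uncommitted version of the same
     record before the first is committed - the situation described
     above that can end in an Invalid Blob ID error. */
  INSERT INTO documents (id, body) VALUES (1, 'first draft');
  UPDATE documents SET body = 'second draft' WHERE id = 1;
  COMMIT;
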
I haven't *known* the problem to occur under the conditions where the
*only* update that is done before modifying the blob is the "dummy
update" that is performed when you use IBO's PessimisticLocking. I
recall some years ago writing an application to test it (probably
during Firebird 1.0 days) and it seemed OK.

But it has *never* been a healthy thing with MGA to make multiple
hits on the same record in the same transaction. I have no idea
where IB 7.5 is "at" in this regard. However, in Firebird 2.0, you
will get an exception if you try to double-hit the same record. This
should pre-empt these Invalid Blob ID errors (along with the Too
Many Savepoints error) in DSQL at least, although I think it is
still possible to get them in a carelessly designed trigger or SP.
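
As a sketch of what "carelessly designed" means here, a stored
procedure along these lines (names invented for illustration) hits
the same record twice inside the caller's transaction, which is
exactly the double-hit pattern that stays risky in PSQL:

  SET TERM ^ ;
  CREATE PROCEDURE TOUCH_TWICE (DOC_ID INTEGER)
  AS
  BEGIN
    /* Two hits on the same record in one transaction - the
       double-hit pattern warned about above. */
    UPDATE documents SET body = 'pass one' WHERE id = :DOC_ID;
    UPDATE documents SET body = 'pass two' WHERE id = :DOC_ID;
  END ^
  SET TERM ; ^
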
I mentioned the PessimisticLocking thing because I think it is at
least *possible* that the new restriction in Fb 2.0 will break IBO's
implementation of this. It would be interesting to hear from anyone
who is using PL with Firebird 2.0.

>It is very hard for us to emulate the problem as it is intermittent
>at clients, thus we cannot get to the exact line of code that is
>the problem.

Assuming we ARE talking about blobs here, and not fields of type
ARRAY, check out any places where you have open-ended tasks that
don't take care to commit work after a new blob has been posted...
I'm not the world's biggest fan of cached updates, but this is one
place where a carefully managed dataset cache would be of benefit.
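
By way of contrast, a sketch of that safer habit, again with
invented names: post the record carrying the new blob, commit, and
only touch the row again in a fresh transaction:

  /* Post the new blob and commit straight away. */
  INSERT INTO documents (id, body) VALUES (2, 'report text');
  COMMIT;

  /* Any further change happens in a new transaction, so the engine
     is never asked to juggle two uncommitted versions of the blob. */
  UPDATE documents SET body = 'revised report text' WHERE id = 2;
  COMMIT;
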
Helen