Subject: Re: [firebird-support] Blob segment size
Author: Ann W. Harrison
Stephen Boyd wrote:
>
> "Normally, you should not attempt to write segments larger than the
> segment length you defined in the table; doing so may result in buffer
> overflow and possible memory corruption."

The issue is not with Firebird, which doesn't care at all what
segment size you use, as long as the length fits in a 16 bit
integer. The problem is that gpre creates a fixed size buffer
for segments, using RDB$SEGMENT_LENGTH as its guide. So the
problem is in the application, not the database. Applications
that create their own buffers for blob segments (rather than
relying on dumb ol' gpre) aren't affected.

The problem occurs when filling the buffer in the application,
not when reading from the database. The blob_get call takes
a buffer length and won't overflow.
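For illustration, here's a rough sketch of reading a blob into an
application-supplied buffer through the C API's isc_get_segment,
which is the call that takes the buffer length. Untested; it
assumes ibase.h from a Firebird client install and a blob handle
that has already been opened, and the 32K buffer size is just a
choice, not anything read from RDB$SEGMENT_LENGTH:

    #include <ibase.h>
    #include <stdio.h>

    /* Read every segment of an open blob and copy it to stdout. */
    static void dump_blob(isc_blob_handle blob)
    {
        ISC_STATUS_ARRAY status;
        char buf[32767];     /* our own size, independent of the
                                declared segment length */
        unsigned short got;  /* actual bytes returned per call */

        for (;;)
        {
            ISC_STATUS rc = isc_get_segment(status, &blob, &got,
                                (unsigned short) sizeof(buf), buf);
            if (rc == 0 || rc == isc_segment)  /* isc_segment means a
                                                  partial segment; more
                                                  of it follows */
                fwrite(buf, 1, got, stdout);
            else
                break;       /* isc_segstr_eof at end, or an error */
        }
    }

Because the buffer length is passed in, a segment longer than the
buffer just comes back in pieces; no overflow.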

>
> Also, I have found two different claims for the maximum length of a
> Blob segment. 65K and 32K. Does anyone know which it is?
>

Beats me. The length is a 16 bit integer, so the hard limit is
65535 bytes. But it's probably interpreted somewhere as signed,
and a signed 16 bit integer tops out at 32767. As a rule, 65K
objects tend to cause problems because the wrappers put around
them cause a length field to overflow, so we normally suggest
32K as a max length.
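And if you're writing the segments yourself, the easy way to stay
under the signed 16 bit limit is to chop the data into 32K chunks.
Another rough sketch, with the same assumptions as above about the
open handle:

    #include <ibase.h>
    #include <stddef.h>

    /* Write a buffer as a series of segments, none longer than
       32767 bytes, so the length can never be misread as negative. */
    static int store_blob(isc_blob_handle blob, const char *data,
                          size_t len)
    {
        ISC_STATUS_ARRAY status;

        while (len > 0)
        {
            unsigned short chunk =
                (unsigned short) (len > 32767 ? 32767 : len);
            if (isc_put_segment(status, &blob, chunk, (char *) data))
                return -1;   /* details are in the status vector */
            data += chunk;
            len  -= chunk;
        }
        return 0;
    }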

Regards,


Ann