| Subject | Re: [firebird-support] Blob segment size |
|---|---|
| Author | Ann W. Harrison |
| Post date | 2006-10-24T21:09:33Z |

Stephen Boyd wrote:
> "Normally, you should not attempt to write segments larger than the
> segment length you defined in the table; doing so may result in buffer
> overflow and possible memory corruption."

The issue is not with Firebird, which doesn't care at all what
you use as a segment size as long as the length fits in a 16 bit
integer. The problem is that gpre creates a fixed size buffer
for segments using the RDB$SEGMENT_LENGTH as its guide. So the
problem is in the application, not the database. Applications
that create their own buffers for blob segments (rather than
relying on dumb ol' gpre) aren't affected.

The problem occurs when filling the buffer in the application,
not when reading from the database. The blob_get call takes
a buffer length and won't overflow.
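As an illustration, this is roughly what "creating your own buffer" looks like against the plain C API rather than gpre-generated code: a minimal sketch only, which assumes the database handle, transaction handle and blob id (here `db`, `trans`, `blob_id`) were obtained elsewhere, e.g. from a SELECT that fetched the blob column; the names and the 16K buffer size are illustrative.

```c
/* Reading a blob with an application-managed buffer via the Firebird
 * C API.  The buffer size is ours, not RDB$SEGMENT_LENGTH's. */
#include <stdio.h>
#include <ibase.h>

void dump_blob(isc_db_handle db, isc_tr_handle trans, ISC_QUAD blob_id)
{
    ISC_STATUS_ARRAY status;
    isc_blob_handle blob = 0;          /* must be zero before opening */
    char buffer[16384];                /* our own buffer, our own size */
    unsigned short actual_len;

    if (isc_open_blob2(status, &db, &trans, &blob, &blob_id, 0, NULL))
        return;                        /* open failed */

    /* isc_get_segment is told how big `buffer` is, so it never writes
     * past it, whatever the declared segment length.  isc_segment means
     * the buffer was filled and more of the current segment remains. */
    for (;;) {
        ISC_STATUS rc = isc_get_segment(status, &blob, &actual_len,
                                        (unsigned short) sizeof(buffer),
                                        buffer);
        if (rc == 0 || rc == isc_segment)
            fwrite(buffer, 1, actual_len, stdout);
        else
            break;                     /* isc_segstr_eof or an error */
    }

    isc_close_blob(status, &blob);
}
```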
> Also, I have found two different claims for the maximum length of a
> Blob segment. 65K and 32K. Does anyone know which it is?

Beats me. The length is a 16 bit integer. It's probably interpreted
somewhere as signed. As a rule, 65K objects tend to cause problems
because the wrappers put around them cause a length to overflow, so
we normally suggest 32K as a max length.
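For the writing side, a sketch of capping segments at that suggested maximum, again with the plain C API and illustrative handle names (`db` and `trans` are assumed to be open; the resulting blob id would then be assigned to a blob column in an INSERT or UPDATE):

```c
/* Writing a large buffer as blob segments no larger than 32K each,
 * staying within a signed 16-bit length. */
#include <ibase.h>

#define MAX_SEGMENT 32767u   /* fits a signed 16-bit segment length */

int store_blob(isc_db_handle db, isc_tr_handle trans,
               const char *data, size_t len, ISC_QUAD *blob_id)
{
    ISC_STATUS_ARRAY status;
    isc_blob_handle blob = 0;

    if (isc_create_blob2(status, &db, &trans, &blob, blob_id, 0, NULL))
        return -1;

    while (len > 0) {
        unsigned short chunk =
            (unsigned short)(len > MAX_SEGMENT ? MAX_SEGMENT : len);
        if (isc_put_segment(status, &blob, chunk, data)) {
            isc_close_blob(status, &blob);
            return -1;
        }
        data += chunk;
        len  -= chunk;
    }

    return isc_close_blob(status, &blob) ? -1 : 0;
}
```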
Regards,
Ann