Subject: Re: [firebird-support] Re: Blob - Invalid segment size
Author: Helen Borrie
At 09:15 AM 3/06/2008, you wrote:
>> >
>> > Context: Blob::Write
>> > Message: Invalid segment size (max 64Kb-1)
>> >
>> > If segment size isn't an issue, why does Firebird even
>> >have this error message?
>> Firebird doesn't have this error message. Have you considered
>> raising this problem in the IBPP forum? It looks possible that IBPP
>> has some kind of logical error occurring when you have a blob that is
>> close to the upper boundary of the size limit on the buffer IBPP is
>> using to pass blob streams across the wire.
>Thanks Helen:
> I didn't realize that this error originated with IBPP.
>It's hard for me to tell.

Most interfaces will give you the 9-digit code associated with an error returned by Firebird. You can also download the full set of codes and their associated (English-language) messages from the Documentation Index (although I suspect that I haven't added the 2.1 codes to the index yet).

> From Firebird's perspective, can any amount of data be written
>to a blob in a single operation? Without segmenting it?

Broadly, yes. In the modern era the segmenting of blobs occurs on the server side, according to an algorithm determined by page size and probably also by whether the blob data is stored on blob pages or data pages. It is not affected by any segment size you define for your blob column, so you can safely forget about it. By default the segment size is 80 bytes, but the engine doesn't care about that, either. (This is an example of the 'qwerty phenomenon': 80 bytes used to be the buffer size of one line of ASCII text on an ASCII terminal, and times were so much simpler when the whole world spoke American!)

Any kind of "segmenting" that a pre-packaged API wrapper does for you is not "segmenting" in Firebird terms. It is management of the buffer defined at the client to feed or receive blob content piece by piece across the wire, according to the client language's rules, the vicissitudes of the blob API and the sizes of packets. If you're curious, it shouldn't be too hard for you to grab the IBPP source and figure out how its buffering of blob content works, vis-a-vis the blob API. I'm not volunteering today. ;-)
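As an aside, the 64Kb-1 ceiling in that error message is what you'd expect if the wrapper ultimately feeds the low-level put-segment call, whose segment length is a 16-bit value. A minimal, self-contained C++ sketch of the kind of chunking a wrapper has to do before handing pieces to the blob API (the names `segment_sizes` and `kMaxSegment` are illustrative, not IBPP's actual internals):

```cpp
#include <cstddef>
#include <vector>

// The largest length a 16-bit segment-size field can describe: 64Kb - 1.
constexpr std::size_t kMaxSegment = 65535;

// Illustrative helper: split a blob payload of 'total' bytes into the
// sequence of segment lengths a wrapper would pass, one per low-level
// put-segment call, each no larger than kMaxSegment.
std::vector<std::size_t> segment_sizes(std::size_t total)
{
    std::vector<std::size_t> sizes;
    while (total > 0) {
        const std::size_t chunk = total < kMaxSegment ? total : kMaxSegment;
        sizes.push_back(chunk);
        total -= chunk;
    }
    return sizes;
}
```

A payload one byte over the limit splits into two segments (65535 + 1), which is exactly the boundary region where a wrapper's buffering bug of the kind suspected above would bite.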