| Subject | Re: [IBO] Re: D2007 |
|---|---|
| Author | Helen Borrie |
| Post date | 2007-07-27T05:30:14Z |
At 11:20 AM 27/07/2007, Jason wrote:
>I still have yet to accomplish this with BLOB MEMO columns. I'm
>wondering if this shouldn't be handled instead using a blob filter.

It is doubtful, I believe.

>My hunch is that this could be very problematic with UTF8 encoding
>because you could have one effective character that maps into
>multiple bytes to get severed at a blob segment boundary. Thus, the
>data of a segment shouldn't be transliterated unless you know for
>sure a character sequence is complete. I suspect this is going to
>be a real pain to deal with in the engine in terms of character
>length vs. byte length issues.

I guess it could cause problems if the blob is stored as sub_type 0,
or as sub_type 1 (aka TEXT) where the character set has not been
specified for the blob. In either case, the engine has no clues as
to the content. Otherwise, the engine takes care of it and correctly
partitions the sequences.
As for segment boundaries, this should not cause problems either,
provided the field has been correctly specified. The engine ignores
the specified segment size completely and applies its own rules for
segmentation at storage time.
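To make the hazard concrete, here is a small sketch (plain Python, not Firebird code) of what Jason describes: cutting a UTF-8 byte stream at an arbitrary segment boundary can sever a multi-byte character, so a segment cannot be decoded in isolation; an engine that knows the blob's character set must buffer the incomplete tail across segments. The segment size of 3 bytes is a made-up value chosen to force a split.

```python
import codecs

# Sample text containing two-byte UTF-8 characters.
text = "naïve café"
data = text.encode("utf-8")     # 12 bytes for 10 characters

SEGMENT = 3                     # hypothetical segment size, in bytes
segments = [data[i:i + SEGMENT] for i in range(0, len(data), SEGMENT)]

# Decoding a segment on its own fails when a character was severed
# at the boundary:
try:
    segments[0].decode("utf-8")
    severed = False
except UnicodeDecodeError:
    severed = True
print("first segment severed a character:", severed)

# An incremental decoder buffers the incomplete byte sequence and
# completes it from the next segment -- essentially what a
# character-set-aware engine must do when partitioning sequences:
dec = codecs.getincrementaldecoder("utf-8")()
decoded = "".join(dec.decode(seg) for seg in segments)
print(decoded == text)          # the text round-trips intact
```

This is only an illustration of why the engine needs to know the character set (i.e. anything other than sub_type 0, or TEXT with no character set) before it can partition sequences safely.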
Helen