Subject: RE: [IBDI] Character sets - for the IBDH database
> The only cost is, as you know better than me, that each UNICODE char
> will use two or three bytes, so your db grows faster and indexes are
> more limited in their length.
> Claudio Valderrama C.

I'm not terribly concerned about 3-byte characters at this point - as long
as we aren't shooting entire documents around the Internet, it should not
be of concern. As for index sizes, there are strategies to deal with that.
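The per-character cost is easy to see for yourself: unicode_fss is a UTF-8-style variable-length encoding, so ASCII stays at one byte, Cyrillic takes two, and some other characters take three. A quick sketch (plain Python, using UTF-8 as a stand-in for unicode_fss):

```python
# unicode_fss stores characters as UTF-8-style variable-length sequences:
# ASCII stays 1 byte, Cyrillic takes 2, some characters take 3.
samples = [
    ("A", "ASCII letter"),
    ("\u0414", "Cyrillic capital De"),
    ("\u20ac", "Euro sign (3-byte range)"),
]
for ch, label in samples:
    print(f"{label}: {len(ch.encode('utf-8'))} byte(s)")
```

So unless whole documents go into a single column, the growth is bounded by the scripts actually stored, not a flat 3x.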
I'm not interested in collation orders and, anyway, I see unicode_fss has
only one, its own.
OK, now where are we?
I've rebuilt the DB with unicode_fss as the charset but I'm still getting
the transliteration error when trying to post a sequence of Cyrillic
characters.
Somebody said if you used the default charset (NONE) the db would accept
any character. Not so. Same problem, at least with WISQL.
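My understanding (an assumption, not a statement of the engine's internals) is that the error comes from the mismatch between the charset the connection claims and the charset of the column: the server tries to map the incoming bytes from one to the other and gives up. The failure mode is roughly this, sketched in Python with cp1251 standing in for a Windows client and ASCII for a 7-bit server charset:

```python
# A transliteration error is essentially this kind of failure:
# bytes tagged with one charset cannot be mapped into another.
cyrillic = "\u041f\u0440\u0438\u0432\u0435\u0442"   # "Privet" in Cyrillic
raw = cyrillic.encode("cp1251")       # what a Windows client typically sends

try:
    raw.decode("ascii")               # mapping into a 7-bit charset fails
except UnicodeDecodeError as err:
    print("transliteration-style failure:", err)

# Sending UTF-8 bytes to a UTF-8 (unicode_fss-like) column has no mismatch:
assert cyrillic.encode("utf-8").decode("utf-8") == cyrillic
```

If that picture is right, the fix is to make the client connection charset match what's actually being typed, rather than changing the column charset again.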
I'm comfortable with modifying my db structure so I don't mix charsets
within a table and instead store the language-specific stuff in its own
tables, without disturbing the XML tree.
I need some way to confirm for myself that IB doesn't care what it gets in
its input stream and, after that, to find some way of interfacing with the
data. I'm making a digest of all ideas and suggestions, many of which will
come back in the book as resources for the XML topic.<g>
If I can find a raw interface option that works, i.e. actually accepts a
row, I have a starting point. Any suggestions?
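One way to confirm byte-transparency, whatever interface turns out to work: encode the text yourself, push the raw bytes through an INSERT, SELECT them back, and compare byte-for-byte. A minimal harness for that check (the `store`/`fetch` callables are placeholders you would wire to your actual driver; the in-memory list below just simulates them):

```python
def roundtrip_ok(store, fetch, text, encoding="utf-8"):
    """Encode text, write it via `store`, read it back via `fetch`,
    and report whether the bytes survived untouched."""
    sent = text.encode(encoding)
    store(sent)
    return fetch() == sent

# Simulated storage standing in for an INSERT/SELECT pair:
buf = []
print(roundtrip_ok(buf.append, lambda: buf[-1], "\u041a\u0438\u0440\u0438\u043b\u043b\u0438\u0446\u0430"))
```

If the round trip holds against the real database, the engine is accepting the input stream as-is and the remaining problem is purely on the tool/client side.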
Thanks all for your input.
"Ask not what your free, open-source database can do for you,
but what you can do for your free, open-source database."