| Subject | Re: [firebird-support] UTF8 in firebird ? |
|---|---|
| Author | Geoff Worboys |
| Post date | 2012-01-06T23:49:12Z |
Vander Clock Stephane wrote:
> i not understand, you spend so much in developpement to win
> speed, you make that you can even optimize some stuff like
> the TcpRemoteBufferSize and here i gave you an option to make
> your system 2x more faster "easily" and i have as an answer
> "wear the cost" ??

Actually I spend most of my time in development trying to
implement function at _acceptable_ speed. If speed were my
primary objective I would still be working with text
interfaces rather than GUIs.

I am far from convinced that your testing reveals real-world
differences between the current UTF8 implementation vs any
practical alternative (which neither ISO_8859 nor OCTETS
represent).
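
A minimal Python sketch of why a single-byte charset is no stand-in
when you actually need Unicode (nothing Firebird-specific is assumed
here, it only shows the client-side encoding behaviour):

```python
# ISO-8859-1 covers only 256 code points, and OCTETS would push all
# character handling onto the application. UTF-8 can encode any
# Unicode text, at the cost of a variable number of bytes per character.
text = "naïve 中文 résumé"

print(len(text), len(text.encode("utf-8")))   # 15 characters, 22 bytes

try:
    text.encode("iso-8859-1")                 # fails on the CJK characters
except UnicodeEncodeError as exc:
    print("ISO-8859-1 cannot store this value:", exc)
```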
[...]
> Keep utf8 like it is if you want, but why not add a new
> charset like UTF8_SVDC that is completely egual to UTF8
> except that it's considere that when i write varchar(250)
> = 250 bytes (or 250 code point if you prefere) ?

I cannot speak to the true complexities inside the engine
for something like this, but I can certainly see issues from
the application/API side (not knowing the actual capacity of
a field until you get the user data). I think you are moving
the burden from memory and storage (which are relatively cheap)
to code complexity (which can be incredibly expensive).
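
To make that concrete, here is a rough Python sketch of the problem an
application would face if the declared length were a byte limit (the
250-byte capacity is just an assumed example, not anything Firebird
defines today):

```python
def chars_that_fit(text: str, byte_capacity: int) -> int:
    """How many leading characters of `text` fit in a field that can
    hold `byte_capacity` bytes of UTF-8."""
    used = 0
    for i, ch in enumerate(text):
        used += len(ch.encode("utf-8"))
        if used > byte_capacity:
            return i
    return len(text)

# The same hypothetical 250-byte column holds very different amounts
# of user data depending on the script being stored.
print(chars_that_fit("a" * 300, 250))       # 250 ASCII characters
print(chars_that_fit("\u03b1" * 300, 250))  # 125 Greek characters
print(chars_that_fit("\u4e2d" * 300, 250))  # 83 CJK characters
```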
Unless/until you can prove a very significant real-world
benefit (which your current primitive tests do not; they only
show that you're better off using a single-byte character set
if you don't need Unicode), I can't really see it being an
attractive alternative. Of course lots of people will be very
happy if you prove me wrong, even me. :-)
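
For what it's worth, the memory/storage side of that trade is easy to
put a rough number on (a back-of-the-envelope sketch, assuming buffers
sized for the worst case of 4 bytes per UTF8 character):

```python
# Worst-case bytes a buffer must reserve for one VARCHAR(250) value.
declared_chars = 250
utf8_worst_case = 4 * declared_chars   # 1000 bytes when sized for 4-byte UTF-8
single_byte = 1 * declared_chars       # 250 bytes for a single-byte charset
print(utf8_worst_case, single_byte)    # 1000 vs 250
```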
--
Geoff Worboys
Telesis Computing Pty Ltd