Subject | Re: [firebird-support] Why can't CHAR and VARCHAR store binary data (ASCII 0)?
---|---
Author | David Garamond |
Post date | 2003-11-29T00:42:44Z |
Ivan Prenosil wrote:
>>From: "David Garamond"I still can't store pure binary data in CHAR or VARCHAR columns.
>>Why can't CHAR and VARCHAR store binary data (ASCII 0)?
>
> Why do you think you can't ?
> Have you tried OCTETS character set ? (Or its synonym BINARY, if you prefer)
> i.e.
> CHAR(10) CHARACTER SET BINARY NOT NULL
==========================================================================
C:\>isql -u sysdba -p masterkey
create database 'c:\tmp\test.fdb' default character set binary;
/* import udfs ... */
create table t1 (
id int not null primary key,
code varchar(2) not null);
commit;
create unique index idx1 on t1(code);
commit;
/* with the unique index in place, inserts 1, 3, 4, 5 are rejected as duplicates */
insert into t1 values (0,'');
insert into t1 values (1,ascii_char(0));
insert into t1 values (2,ascii_char(32));
insert into t1 values (3,ascii_char(0)||ascii_char(0));
insert into t1 values (4,ascii_char(0)||ascii_char(32));
insert into t1 values (5,ascii_char(32)||ascii_char(0));
insert into t1 values (6,ascii_char(32)||ascii_char(32));
commit;
select
id,
strlen(code),
ascii_val(substr(code,1,1))||'-'||ascii_val(substr(code,2,2)) as code
from t1;
==========================================================================
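(For context: the "/* import udfs ... */" step above would normally run
declarations along these lines. They are paraphrased from memory of the
stock ib_udf.sql shipped with Firebird 1.x -- an assumption, so check your
own script. The detail that matters is CSTRING: a NUL-terminated C string.)
==========================================================================
/* assumed ib_udf.sql declarations (Firebird 1.x era) -- verify locally */
declare external function ascii_char
    integer
    returns cstring(1) free_it           /* char(1) in some versions */
    entry_point 'IB_UDF_ascii_char' module_name 'ib_udf';

declare external function ascii_val
    char(1)
    returns integer by value
    entry_point 'IB_UDF_ascii_val' module_name 'ib_udf';

declare external function strlen
    cstring(32767)
    returns integer by value
    entry_point 'IB_UDF_strlen' module_name 'ib_udf';

declare external function substr
    cstring(80), smallint, smallint
    returns cstring(80) free_it
    entry_point 'IB_UDF_substr' module_name 'ib_udf';
==========================================================================
Since a CSTRING stops at the first NUL byte, any value passing through
these functions can silently lose everything from the first ASCII 0 on.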
When I insert the records without the unique index (so all seven rows go
in), the select returns:
ID STRLEN CODE
============ ============ =======================
0 0 0-0
1 0 0-0
2 1 32-0
3 0 0-0
4 1 32-0
5 1 32-0
6 2 32-32
So it seems binary CHAR/VARCHAR columns are still handled as
null-terminated strings somewhere. Otherwise, why would record 4 come back
as 32-0 when it should be 0-32? Or am I doing something wrong? How can I
store NUL bytes (ASCII 0) inside a non-BLOB column?
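One way to narrow this down (a sketch -- the predicted output is inferred
from the numbers above, not a verified run): evaluate the same UDF chain
with no table at all, so storage cannot be the culprit.
==========================================================================
/* probe the UDF chain alone, no column involved:
   if the NUL survives the expression, strlen reports 2;
   if the CSTRING plumbing drops it, strlen reports 1 */
select strlen(ascii_char(0) || ascii_char(32)) from rdb$database;
==========================================================================
If that prints 1, the bytes never reach the column in the first place, and
a parameterised insert from a client API (bypassing the UDFs entirely)
should get real NULs into a CHAR/VARCHAR OCTETS column.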
--
dave
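A sketch for much later Firebird versions (2.5+ for x'' binary literals,
2.0+ for octet_length -- neither existed when this was posted), which
sidesteps the UDF layer entirely, assuming the same t1 table:
==========================================================================
/* Firebird 2.5+: x'' literals are raw bytes, no CSTRING in sight */
insert into t1 values (7, x'0020');   /* two bytes: 0x00, 0x20 */
commit;
select id, octet_length(code) from t1 where id = 7;
/* expect octet_length = 2: the OCTETS column stores NUL bytes fine */
==========================================================================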