| Subject | Writing real external files |
|---|---|
| Author | netzka |
| Post date | 2006-09-20T23:30:29Z |
Hello, everyone...
I'm new to this group, and hope you'll be able to help me!
As we got into the conversions world, we (we = my company) began
to "dig into the bits" in order to have better and faster conversion
processes. This week we reached an "almost-great" level - it
would've been great if we didn't have this little issue! Here's the
deal...
Until now, we were using external files filled with CHAR values
(everything became CHAR, whether it was a number, a numeric or a
timestamp), and then translated each record into a "real-fielded"
table (with proper field types). That's everyone's method,
isn't it?!
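Just so it's clear what I mean by the "all CHAR" method, here's a rough
Python sketch (the table, its file name and the column widths are made-up
examples, not our real layout):

```python
# Minimal sketch of the "everything as CHAR" approach.
# EXT_CUSTOMER and its column widths are invented for the example.
def char_record(cust_id, name, created):
    # Every column is CHAR(n): exactly n bytes, left-aligned and
    # space-padded, so each record is a fixed-width text line.
    return (cust_id.ljust(10)[:10]
            + name.ljust(30)[:30]
            + created.ljust(19)[:19]).encode("ascii")

with open("ext_customer.dat", "ab") as f:
    f.write(char_record("1", "ACME Ltda", "2006-09-20 23:30:29"))
```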
But then I began to study an external table created by Firebird,
reading it byte by byte (i.e. reverse engineering), to learn the
structure of each field type as it's stored! Well, I almost got
it! In fact, I got how the whole structure works (for example:
VARCHAR fields have 2 bytes before the value that give the length
of the field's value; numeric fields are 8-byte integer fields;
dates are 4-byte; etc.).
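To give an idea of how I'm using that layout, here's a rough Python
sketch of how I pack one record (little-endian byte order, the column
sizes, the scale and the file name are assumptions for the example, not
facts about my real table):

```python
import struct

def pack_varchar(value, declared_len):
    data = value.encode("ascii")[:declared_len]
    # 2 bytes before the value give its length, then the buffer is
    # padded out to the declared size of the field.
    return struct.pack("<H", len(data)) + data.ljust(declared_len, b"\x00")

def pack_numeric(value, scale):
    # NUMERIC stored as an 8-byte integer, scaled (12.34 with scale 2 -> 1234).
    return struct.pack("<q", round(value * 10 ** scale))

def pack_date(days):
    # DATE stored as a 4-byte integer (a day count relative to the engine's epoch).
    return struct.pack("<i", days)

record = pack_varchar("Henrique", 30) + pack_numeric(123.45, 2) + pack_date(39345)
with open("typed_table.dat", "ab") as f:
    f.write(record)
```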
I tested with a small (5-fielded) table and it worked pretty well!
But when I got into a bigger table (in which each record [row] has
about 2.5 KB), the CHAR(1) fields seem to get extra NULL (#0)
characters written after the end of them, but this happens only
sometimes and I couldn't figure out when it happens, nor how many
#0 bytes I should put after them! So, that's my question!! What is
it about CHAR(1) fields that makes them take up more bytes than
normal when the table has big records?!
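In case it helps, this is more or less how I've been inspecting the file
to spot where the extra #0 bytes show up (the file name is a placeholder
and the record size is only approximate, since my rows are about 2.5 KB):

```python
# Hex-dump the first record so the extra #0 bytes after CHAR(1)
# columns can be located by eye.
RECORD_SIZE = 2560  # approximate; placeholder value

with open("big_table.dat", "rb") as f:
    first = f.read(RECORD_SIZE)

for offset in range(0, len(first), 16):
    chunk = first[offset:offset + 16]
    hexpart = " ".join("%02x" % b for b in chunk)
    text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
    print("%06x  %-47s  %s" % (offset, hexpart, text))
```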
Thanks in advance,
Henrique Netzka