Subject: RE: [IB-Conversions] Fox to Interbase with IBO/IB_WISQL?
Author Claudio Valderrama C.
> -----Original Message-----
> From: Joe Fay [mailto:joefayjr@...]
> Sent: Wednesday, 12 July 2000 9:51
>
> Just a couple of notes and questions:
>
> 1. There is a difference between Fox 2.6 and DBase data table systems --
> the indices are different. I've gotten burned with Delphi's datapump
> before

I think that, unfortunately, the new db cannot be created in a decent way
by the DataPump. Maybe there's an alternative: let the DataPump move the
data from XYZ into an automatically generated DB, then create your own
"decent" db with a script and copy the data over from the automatic db.


> many of the 'nice' things about DBase, Clipper, and FoxPro were that
> things like triggers were not 'encapsulated'...they were called 'WHEN' and
> 'VALID' clauses. These will have to be re-done by hand after the
> conversion.

Depending on their complexity, of course. Basic VALID clauses can be
implemented with a CHECK constraint, for example:

create table tbl (
...
fieldZ int not null check(fieldZ between 0 and 10)
...
);

or, if you find that your original db has fields that repeat the same
pattern, then you can do:

create domain dom_Z int not null check(value between 0 and 10);

and then define:

create table tbl (
...
fieldZ dom_Z
...
);

You can even add more checks at the column level, for example

fieldZ dom_Z check(fieldZ <> 5)

so you end up with a value that can only be in the ranges
0..4 and 6..10.
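For VALID or WHEN logic that is too complex for a CHECK constraint, a
trigger is the usual InterBase replacement. A rough sketch (the table,
field and exception names here are hypothetical, and "set term" is an
isql command for changing the statement terminator):

```sql
/* exception must exist before the trigger references it */
create exception e_bad_fieldz 'fieldZ failed validation';

set term !! ;
create trigger tbl_bi for tbl
active before insert position 0
as
begin
  /* reject the value the old VALID clause would have rejected */
  if (new.fieldZ = 5) then
    exception e_bad_fieldz;
end !!
set term ; !!
```

A matching BEFORE UPDATE trigger would be needed too, since VALID
clauses fired on edits as well as on inserts.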



> 2. In the dim past, I was a dumb, if happy proliferator of XBase tables
> and applications.

In the past, almost all of us followed this path for moderately small
applications.


> So almost everything I do will be a conversion of
> mostly FoxPro tables, and there are plenty of BLOB (read memo) fields,
> both text based,and binary. So whatever paradigm I come up with MUST
> include a reliable way to preserve the history contained in those BLOBS.

Ok, if the BDE can read a BLOB from a FoxPro table, then you can dump that
field to the target db. Assuming you have created the structure of the new
db, you can move all the "finite" fields with whatever method you like, and
afterwards you would need something like this:
- read the PK + the BLOB from the source;
- do an UPDATE on the target, filling the BLOB in the row whose PK value
matches the source row's PK.
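In SQL terms, the per-row step could look like this (table and column
names are only placeholders; :pk and :memo are the parameters your pump
code would bind from the values read on the FoxPro side):

```sql
/* run once per source row, after reading its PK and memo */
update target_tbl
set memo_blob = :memo
where pk_field = :pk;
```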


> a. created a routine to add delimiters of my choice to the
> beginning
> and end of each memo, and place the record number from which the memo was
> extracted at the head of the file.

Well, if your original BLOBs are text only, then there's no problem. If
they hold genuine binary information, then things need a different
approach.


> Thank you all for the suggestions. I wear my thoughts and emotions on
> my shirt sleeve, so, foolish or not, you'll always know where I'm at,
> what trouble I'm having, and what I'm thinking. Good to be here.
>
> Joe Fay.

Ok, the idea is to make the data travel as straightforwardly as possible.
But I really think that one must:
- Define the target fields with criteria. Example: maybe you used a float
field because you couldn't get exact precision in your original format.
In that case, the DataPump would create a float field, but in IB you are
better off selecting the numeric(x,y) that best suits your needs.
- Try to find common declarations and convert them to domains. Less
complexity.
- Let the engine manage as many low-level issues for you automatically as
possible. There are domains, check constraints, referential constraints,
triggers, procedures, selectable procedures, views, UDFs and
SQL rights.
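To illustrate the first two points, a small sketch (the names are mine,
not from any real schema): instead of the float the pump would generate,
declare a domain with the exact precision you need and reuse it:

```sql
/* exact fixed-point type instead of an approximate float */
create domain dom_money as numeric(15, 2) not null;

create table invoice (
  inv_no   int not null primary key,
  total    dom_money,
  discount dom_money
);
```

One domain, one CHECK or precision decision, applied consistently to
every column that shares the pattern.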

C.