Subject Re: [IBO] Conversion from dBase to Firebird using IBO
Author Jason Wharton
What she is recommending is that you have a stored procedure whose input
parameters match the fields of the tables in the source database. That makes
it really fast and easy to simply pump the data through the stored procedure,
and keeping that logic in stored procedures also gives you a layer of control.
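
For example (just a minimal sketch with hypothetical table and column names,
since the real structures aren't shown in this thread), the Firebird side
might look something like:

SET TERM ^ ;

CREATE PROCEDURE PUMP_CUSTOMER (
  SRC_CUSTNO   INTEGER,
  SRC_NAME     VARCHAR(40),
  SRC_BALANCE  NUMERIC(15,2))
AS
BEGIN
  /* Any mapping from the old dBase layout to the new Firebird structure
     goes in here: renamed columns, lookups, defaults, fields that get
     split or combined, and so on. */
  INSERT INTO CUSTOMER (CUST_ID, CUST_NAME, BALANCE)
  VALUES (:SRC_CUSTNO, :SRC_NAME, :SRC_BALANCE);
END ^

SET TERM ; ^

On the client side you prepare one parameterised statement, e.g.
EXECUTE PROCEDURE PUMP_CUSTOMER (:SRC_CUSTNO, :SRC_NAME, :SRC_BALANCE),
and execute it once per source row. There is a second sketch at the bottom
of this message showing how the batched commits described in the original
post could drive it.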

Jason Wharton
CPS - Mesa AZ
http://www.ibobjects.com

-- We may not have it all together --
-- But together we have it all --


----- Original Message -----
From: <Cmich22@...>
To: <IBObjects@yahoogroups.com>
Sent: Thursday, January 02, 2003 12:17 PM
Subject: Re: [IBO] Conversion from dBase to Firebird using IBO


> That sounds right if I were updating within the same database, but I
> am going from a dBase (Paradox) database with one table structure to a
> Firebird database with a different structure. I am not sure how I could
> use a stored procedure in this case other than to have an insert
> statement with a bunch of params.
>
> Does this shed any new light?
>
> --- In IBObjects@yahoogroups.com, "Lindsay Gauton" <lgauton@e...>
> wrote:
> > One of my major systems updates 3 million+ records every day. The
> > only way to do it is via stored procedures - any other method would
> > just take too long.
> >
> > Thanks
> >
> > Lindsay
> >
> > -----Original Message-----
> > From: chris_michalec <Cmich22@a...> [mailto:Cmich22@a...]
> > Sent: 02 January 2003 15:50
> > To: IBObjects@yahoogroups.com
> > Subject: [IBO] Conversion from dBase to Firebird using IBO
> >
> > I have a conversion utility that is taking data from dBase tables
> > into a Firebird gdb. The table in question has about 300,000 records
> > and over 100 fields. I am using IB_Scripts and posting every couple
> > of thousand records. Initially it starts off fast, but after about
> > 100,000 records or so, it slows to a crawl. That puts the conversion
> > utility at finishing the table in about 3 days (which is just too
> > long). I have the scripts posting in their own transaction that I
> > create each time and commit. I have also tried committing the main
> > physical transaction (which, from my reading of the documentation,
> > should actually be taken care of on its own, but hey, I thought I'd
> > try). I thought maybe it was the BDE trying to scroll through
> > something that huge, so I called FetchAll on the dataset to just pull
> > back everything. The machine this is running on should have more than
> > enough resources to handle this. Any other thoughts out there that
> > maybe I haven't tried?
> >
> > Thanks for the help
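
Tying that back to the IB_Script approach in the original message (again
only a sketch, assuming the same hypothetical PUMP_CUSTOMER procedure and
that the script component passes COMMIT statements through - otherwise,
commit from the application between batches):

EXECUTE PROCEDURE PUMP_CUSTOMER (1001, 'Smith, J.', 150.00);
EXECUTE PROCEDURE PUMP_CUSTOMER (1002, 'Jones, A.', 0.00);
/* ... a couple of thousand rows per batch ... */
COMMIT;
/* ... next batch ... */

Committing every couple of thousand rows keeps each transaction small
instead of letting one transaction span all 300,000 inserts. Whether the
transaction is the actual bottleneck here is a separate question, but this
is the pattern the thread is describing.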