Subject: RE: [firebird-support] Migration from FB1.5 dialect 1 to FB2.1 dialect 3
Author: Leyne, Sean
Bob,

> I know that there was a migration guide for upgrading a dialect 1 db
> to dialect 3 back when dialect 3 came out. After reviewing the
> changes, it was deemed too costly to upgrade the database, so we have
> kept just upgrading the engine but leaving the dialect as is.

That is the boat that our application is in, as well.


> From my recollection, there were a number of metadata changes that
> needed to be made to a DDL extract to accommodate the new datatypes (I
> can remember the difference between the old DATE and new
> DATE/TIME/TIMESTAMP being a significant one).

Another is the difference in the results of division operations: in
dialect 3, dividing one integer by another returns a truncated integer
result, whereas dialect 1 returns a fractional value.


> A new database would then be created from the modified DDL, and the
> data 'pumped' from the old database to the new one. Have these
> procedures changed since way back then?

Nope.


> Here's the real important question - is there a faster way to do this?

Not that I'm aware of.


> (Currently, a full backup-restore cycle on this database
> takes about 17 hours.

Do you use separate *physical* disks for this?

Is the TMP/TEMP folder on the same physical disk as the database being
restored?


> Until I can get my customers to sign off on an
> archiving strategy I have to live with it) I presume that a 'new
> database' conversion is going to take somewhere in the same
> neighborhood, if not longer.

Depending on the approach, it could be much longer ;-(

(There are third-party tools which can perform a single-threaded data
pump, but I think you need 'real horsepower')

Depending on the type of server you would be running the pump operation
on, and provided it has the throughput, you could pump several tables at
once.

So, here's how I would handle the process.

Create a new database which has all tables defined, but with no primary
keys, constraints, triggers or indexes. This database should have
Forced Writes = OFF for maximum throughput (if the pump fails, you
would need to start from scratch anyhow).
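
As a point of reference, that preparation step might look roughly like
the sketch below. This assumes the Firebird command-line tools (isql,
gfix) are on the path, that the bare-tables DDL has already been
extracted to a script (here called tables_only.sql, assumed to begin
with the CREATE DATABASE statement), and that all paths and credentials
are placeholders:

import subprocess

DB = "server:/data/new_dialect3.fdb"    # placeholder target path
USER, PASSWORD = "SYSDBA", "masterkey"  # placeholder credentials

# Create the dialect 3 database and its bare tables from the DDL script
# (tables only -- no keys, constraints, triggers or indexes yet).
subprocess.run(
    ["isql", "-user", USER, "-password", PASSWORD,
     "-input", "tables_only.sql"],
    check=True,
)

# Switch forced writes off for maximum pump throughput.
subprocess.run(
    ["gfix", "-write", "async", DB,
     "-user", USER, "-password", PASSWORD],
    check=True,
)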

Create a simple *multi-threaded* application with a control loop that
builds a list of tables to be pumped, then proceeds through that list
and assigns a worker thread to each table. Each worker (a rough sketch
follows the list):
- connects to the source and target databases
- prepares a SELECT * statement for the assigned table on the source
database
- prepares a parameterized INSERT statement on the target database
- executes the SELECT statement
- loops through the SELECT results,
  - assigns the parameters to the INSERT and executes same
  - commits the transaction every 5,000 rows
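
In Python, the per-table worker could look roughly like this. It is
only an illustration, assuming the "fdb" DB-API driver (any Firebird
DB-API 2.0 driver would look much the same); the connection strings,
credentials, table list and worker count are all placeholders:

import fdb
from concurrent.futures import ThreadPoolExecutor

SOURCE_DSN = "server:/data/old_dialect1.fdb"   # placeholder source path
TARGET_DSN = "server:/data/new_dialect3.fdb"   # placeholder target path
CREDENTIALS = dict(user="SYSDBA", password="masterkey")
BATCH = 5000                                   # commit every 5,000 rows

def pump_table(table):
    src = fdb.connect(dsn=SOURCE_DSN, **CREDENTIALS)
    dst = fdb.connect(dsn=TARGET_DSN, **CREDENTIALS)
    try:
        read = src.cursor()
        read.execute("SELECT * FROM " + table)
        # One '?' marker per source column for the parameterized INSERT.
        markers = ", ".join("?" for _ in read.description)
        insert = "INSERT INTO " + table + " VALUES (" + markers + ")"
        write = dst.cursor()
        while True:
            rows = read.fetchmany(BATCH)
            if not rows:
                break
            write.executemany(insert, rows)
            dst.commit()          # commit after each batch of rows
    finally:
        src.close()
        dst.close()

tables = ["CUSTOMERS", "ORDERS", "ORDER_LINES"]   # placeholder list
# Worker count tuned to the hardware, as discussed below.
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(pump_table, tables))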

Then, once all the data have been moved, I would apply a database script
for the balance of the metadata (keys, indexes, constraints, triggers,
SPs, Domains) as well as change Forced Writes back to ON. (The creation
of the keys, indexes and constraints could also be done by worker
threads, which would significantly speed up the pump process.)
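
The finishing step could be scripted along the same lines as the
preparation sketch, again assuming the command-line tools and treating
the file and database names as placeholders:

import subprocess

DB = "server:/data/new_dialect3.fdb"    # placeholder target path
USER, PASSWORD = "SYSDBA", "masterkey"  # placeholder credentials

# Apply the balance of the metadata (keys, indexes, constraints,
# triggers, SPs, domains) once the pump has finished.
subprocess.run(
    ["isql", "-user", USER, "-password", PASSWORD,
     "-input", "metadata_rest.sql", DB],
    check=True,
)

# Switch forced writes back on for normal operation.
subprocess.run(
    ["gfix", "-write", "sync", DB,
     "-user", USER, "-password", PASSWORD],
    check=True,
)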

The number of worker threads active at any time would be based on the
throughput of the server hardware (dual- vs. quad-core CPU, local ATA HD
vs. 15K SAS HDs on a RAID controller with write caching active).


Sean