Subject Re: [firebird-support] Committing in a Stored Procedure
Author Helen Borrie
At 10:29 PM 25/03/2004 +0000, you wrote:
>I create an external table for importing data then
>I create a stored procedure to import data from the external table
>because I have to validate some of the values in the external table
>I am using:
>
>for select [fields] from ext_tbl
>do
>begin
> /* validate and add or update here */
>end
>
>then I drop the stored procedure and drop the external table
>
>I have to do this on about 450 files, the biggest file being over
>4MB and having over 7500 rows to import.
>
>Is this the best way to do this sort of import?

No. Creating and dropping stored procedures on the fly is bad
practice. SPs in Firebird are compiled, persistent objects, not scripts.

>Is there another way that is faster?

A stored proc is fine for this task, but a stored proc is a persistent
object. Write the stored proc once and execute it from the client for
each file.
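As a sketch only (table and column names -- ext_tbl, target_tbl, id, val --
are placeholders for your own, and the validation rule shown is invented
for illustration), a persistent import procedure would look something like:

```sql
/* Created once, then executed for every file; never dropped and recreated. */
SET TERM ^ ;

CREATE PROCEDURE IMPORT_EXT
AS
  DECLARE VARIABLE v_id INTEGER;
  DECLARE VARIABLE v_val VARCHAR(100);
BEGIN
  FOR SELECT id, val FROM ext_tbl
      INTO :v_id, :v_val
  DO
  BEGIN
    /* validate here, then add or update as your rules require */
    IF (v_val IS NOT NULL) THEN
      INSERT INTO target_tbl (id, val) VALUES (:v_id, :v_val);
  END
END^

SET TERM ; ^
```

Between files you only need to swap the external file behind ext_tbl; the
procedure itself stays put.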

>Should I be committing in the stored procedure every couple hundred
>inserts/updates?

Depends on the size of the rows and other things. With a ~530-byte row
size (your 4 MB file of 7500 rows), every 8000 rows would be a reasonable
batch size to consider. Have the procedure return some kind of unique
combination of values to the client, identifying the first row of the
"next" batch, so that the client can target the start of each batch in
turn. At worst, that key will be the values of every column in the
"next" row.

>and if so how do I commit in a stored procedure?

You don't. An SP executes inside an atomic transaction context: its
work is committed when the client commits the transaction, or rolled back
when the client rolls back.

Set it up so the SP takes the input key, performs e.g. 8000 inserts
from that point, and finishes by returning the next "key" to the
client. The client hard-commits and then starts a new transaction for the
next batch.
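Sketched under assumptions (the external file has a unique, orderable key
column here called id; names and the VARCHAR size are placeholders), the
batching procedure could look like this:

```sql
SET TERM ^ ;

CREATE PROCEDURE IMPORT_BATCH (start_id INTEGER)
RETURNS (next_id INTEGER)
AS
  DECLARE VARIABLE v_id INTEGER;
  DECLARE VARIABLE v_val VARCHAR(100);
  DECLARE VARIABLE v_count INTEGER;
BEGIN
  v_count = 0;
  next_id = start_id;
  FOR SELECT id, val FROM ext_tbl
      WHERE id > :start_id
      ORDER BY id
      INTO :v_id, :v_val
  DO
  BEGIN
    /* validate, then add or update */
    INSERT INTO target_tbl (id, val) VALUES (:v_id, :v_val);
    next_id = v_id;
    v_count = v_count + 1;
    /* stop after 8000 rows; EXIT jumps to the final END */
    IF (v_count >= 8000) THEN
      EXIT;
  END
END^

SET TERM ; ^
```

The client then runs EXECUTE PROCEDURE IMPORT_BATCH with the last key it
received (starting from zero or whatever precedes the first key),
hard-commits, and repeats with the returned NEXT_ID until the key stops
advancing.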

Don't index the target table until the import is finished: maintaining
indexes on every insert slows the load considerably. Add the indexes
afterwards.

/heLen