Subject | Re: Gigabytes of Inserts
---|---
Author | christian_stengel
Post date | 2005-03-02T13:24:18Z
Hi,
> In order to help speed it along I was planning to do a commit every
> 1000 inserts or so. Is this what is meant by a "bulk insert" - just
> not doing a commit too frequently?

You have 2 choices:
- external tables
- a program using the C API

Both are very fast - with external tables you only have to write the export program, but
with the C API you also have the option of modifying the data on the fly.

When using external tables, you have the following limitations:
- NULL fields and blob fields are not supported

With external tables, you should use the binary representation of varchar values (they are
stored in a struct) - see the sketch below.
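
For illustration, a minimal sketch of the export side. The file name, table and column
layout are assumptions (a hypothetical external table declared as
CREATE TABLE import_ext EXTERNAL FILE '/data/import.dat' (id INTEGER, name VARCHAR(20))),
and the exact record format is worth double-checking against your Firebird version:

    /* ext_writer.c - sketch of the fixed-length binary record an external
       table expects; names and layout are assumptions, verify them
       against your Firebird version.
       Build: gcc ext_writer.c -o ext_writer */
    #include <stdio.h>
    #include <string.h>

    /* a varchar is stored as a 2-byte length followed by the full declared
       width - the struct representation mentioned above */
    #pragma pack(push, 1)            /* no compiler padding between fields */
    typedef struct {
        int   id;                    /* INTEGER: 4 bytes, native byte order */
        short name_len;              /* used length of the varchar          */
        char  name_data[20];         /* always the full declared width      */
    } ext_record;
    #pragma pack(pop)

    int main(void) {
        FILE *f = fopen("import.dat", "wb");
        ext_record rec;
        long i;
        if (!f) return 1;
        memset(&rec, 0, sizeof rec);
        for (i = 0; i < 1000; i++) {
            rec.id = (int) i;
            rec.name_len = (short) sprintf(rec.name_data, "row %ld", i);
            fwrite(&rec, sizeof rec, 1, f);   /* one fixed-length row */
        }
        fclose(f);
        return 0;
    }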
I personally prefer the C API program - it's a little more complicated, but it's worth it. I had
a table with 15 columns and did a commit after 100,000 records. The 100,000 records were
processed in about 10-20 seconds on a P4 3 GHz.
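
A minimal sketch of that pattern, for illustration - one prepared INSERT executed in a
loop with a hard commit every 100,000 rows. The database path, table, columns and
credentials are all made up for the example:

    /* bulk_insert.c - sketch only: table, columns, path and credentials
       are invented. Build on Linux: gcc bulk_insert.c -o bulk_insert -lfbclient */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <ibase.h>

    static void die(ISC_STATUS *status) {
        isc_print_status(status);
        exit(1);
    }

    int main(void) {
        ISC_STATUS_ARRAY status;
        isc_db_handle   db   = 0;
        isc_tr_handle   tr   = 0;
        isc_stmt_handle stmt = 0;
        ISC_LONG id;
        long i;
        short ind0 = 0, ind1 = 0;
        struct { short len; char data[20]; } name;  /* SQL_VARYING buffer */

        /* database parameter block carrying user name and password */
        const char *user = "SYSDBA", *pass = "masterkey";
        char dpb[64], *p = dpb;
        *p++ = isc_dpb_version1;
        *p++ = isc_dpb_user_name; *p++ = (char) strlen(user);
        memcpy(p, user, strlen(user)); p += strlen(user);
        *p++ = isc_dpb_password;  *p++ = (char) strlen(pass);
        memcpy(p, pass, strlen(pass)); p += strlen(pass);

        if (isc_attach_database(status, 0, "localhost:/data/test.fdb",
                                &db, (short)(p - dpb), dpb)) die(status);
        if (isc_start_transaction(status, &tr, 1, &db, 0, NULL)) die(status);

        /* prepare the INSERT once, execute it many times */
        if (isc_dsql_allocate_statement(status, &db, &stmt)) die(status);
        if (isc_dsql_prepare(status, &tr, &stmt, 0,
                "INSERT INTO import_tab (id, name) VALUES (?, ?)",
                SQL_DIALECT_V6, NULL)) die(status);

        XSQLDA *in = (XSQLDA *) calloc(1, XSQLDA_LENGTH(2));
        in->version = SQLDA_VERSION1;
        in->sqln = 2;
        if (isc_dsql_describe_bind(status, &stmt, SQLDA_VERSION1, in)) die(status);

        in->sqlvar[0].sqltype = SQL_LONG;     /* column 1: INTEGER */
        in->sqlvar[0].sqldata = (char *) &id;
        in->sqlvar[0].sqlind  = &ind0;
        in->sqlvar[1].sqltype = SQL_VARYING;  /* column 2: VARCHAR(20) */
        in->sqlvar[1].sqldata = (char *) &name;
        in->sqlvar[1].sqlind  = &ind1;

        for (i = 0; i < 1000000; i++) {
            id = (ISC_LONG) i;
            name.len = (short) sprintf(name.data, "row %ld", i);
            if (isc_dsql_execute(status, &tr, &stmt, SQLDA_VERSION1, in)) die(status);

            /* hard commit every 100,000 rows; the prepared statement stays
               valid, only the transaction is restarted */
            if ((i + 1) % 100000 == 0) {
                if (isc_commit_transaction(status, &tr)) die(status);
                tr = 0;
                if (isc_start_transaction(status, &tr, 1, &db, 0, NULL)) die(status);
            }
        }

        isc_commit_transaction(status, &tr);
        isc_dsql_free_statement(status, &stmt, DSQL_drop);
        isc_detach_database(status, &db);
        free(in);
        return 0;
    }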
> I was planning to zip the completed txt files up and ftp them to the

You should do this. Do not run the program locally and connect to a remote server - this
is definitely slower unless you use a gigabit connection.
> Also, I noticed when running the SQL script file in EMS IB Manager
> that the inserts happen very quickly at the start of the file, but
> that they slow considerably near the end of a long script (5 or 10 MB
> in size). Would it be quicker overall if I worked with smaller script
> files (say a megabyte or so)?
The problem here is that the indices become unbalanced as the script runs - so before
inserting the data, deactivate all indices, and reactivate them after the import is done.
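
The ALTER INDEX statements can be typed in isql or issued from the import program itself.
A hedged sketch of the latter via isc_dsql_execute_immediate - the index and database
names are made up, and it assumes ISC_USER/ISC_PASSWORD are set in the environment:

    /* alter_idx.c - sketch: switch an index off before the load and back
       on afterwards. Build: gcc alter_idx.c -o alter_idx -lfbclient */
    #include <stdio.h>
    #include <ibase.h>

    int main(void) {
        ISC_STATUS_ARRAY status;
        isc_db_handle db = 0;
        isc_tr_handle tr = 0;

        if (isc_attach_database(status, 0, "localhost:/data/test.fdb",
                                &db, 0, NULL))
            { isc_print_status(status); return 1; }
        if (isc_start_transaction(status, &tr, 1, &db, 0, NULL))
            { isc_print_status(status); return 1; }

        /* same statement you would type in isql */
        if (isc_dsql_execute_immediate(status, &db, &tr, 0,
                "ALTER INDEX idx_import_tab_name INACTIVE", SQL_DIALECT_V6, NULL))
            isc_print_status(status);

        /* ... run the bulk insert here; afterwards ACTIVE rebuilds the
           index in one pass instead of updating it row by row */
        if (isc_dsql_execute_immediate(status, &db, &tr, 0,
                "ALTER INDEX idx_import_tab_name ACTIVE", SQL_DIALECT_V6, NULL))
            isc_print_status(status);

        isc_commit_transaction(status, &tr);
        isc_detach_database(status, &db);
        return 0;
    }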
> Can you run a script file via isql or some other similar program that
> comes with firebird?

isql -i file_to_insert_data.sql will do the insert - but it's not as fast as using the C API
and prepared insert statements.
Do the stuff on a unix machine with Classic Server - this is the fastest option. You can also
tweak some settings in firebird.conf - e.g. MaxUnflushedWrites.
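
For example (illustrative values only - the defaults are platform-dependent, and the right
numbers depend on your disk and your forced-writes setting):

    # firebird.conf - flush to disk less often during a bulk load
    MaxUnflushedWrites = 10000
    MaxUnflushedWriteTime = 60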
Chris