Subject: Re: [ib-support] Re: insert optimization (long)
Author: Erik S. LaBianca
Post date: 2003-05-28T14:54:20Z
Svein,
Thanks for the suggestion. I changed it to 10k records and reran; it
inserted 657534 records in 285 seconds (2307/sec), a 10-second
improvement. Not too shabby.
For interest's sake, completely removing the inline commit calls results
in 657534 records in 290 seconds (2267/sec), so it seems as if
committing every 10k records is the best.
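To illustrate the batched-commit pattern being measured here: the thread's actual code is Perl DBI against InterBase/Firebird, but the same idea can be sketched in Python with the standard-library sqlite3 module (the table name, row count, and batch size below are illustrative, not from the original benchmark).

```python
# Sketch of committing every N inserts instead of every insert.
# Uses sqlite3 as a stand-in for the Perl DBI / InterBase setup in the thread.
import sqlite3

BATCH = 10_000  # commit interval; 10k was the sweet spot in the tests above

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

count = 0
for i in range(25_000):
    conn.execute("INSERT INTO t VALUES (?, ?)", (i, f"row {i}"))
    count += 1
    if count % BATCH == 0:   # commit every BATCH rows, not every row
        conn.commit()
conn.commit()                # final commit for the partial last batch

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])
```

The trade-off is the usual one: fewer commits means less transaction overhead per row, but a larger window of uncommitted work lost on failure; as the numbers above show, past a certain batch size the gain flattens out.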
I'm going to try turning off forced writes, and also run the MySQL
comparison using an InnoDB table.
--erik
Svein Erling wrote:
> One thing you could try Erik, is to commit every 10000 records rather
> than every 1000 records. I don't know if this is enough to equal
> MySQL, but I would expect it to speed up the insertion process.
>
> Set
>
> --- In ib-support@yahoogroups.com, "Erik S. LaBianca" <erik@t...>
> wrote:
>
>> if ($count % 1000 == 999) {
>> $dbh->commit();
>> }
Erik S. LaBianca <erik@...>
Developer / System Administrator, Interlink, Inc.
Voice: (269) 473-3103 Fax: (269) 473-1190
http://www.totalcirculation.com