| Subject | Re: [firebird-support] Help |
|---|---|
| Author | Alexandre Benson Smith |
| Post date | 2005-06-10T14:19:23Z |
Ivan Prenosil wrote:
>"Alexandre Benson Smith" wrote:
>
>
>>Ivan Prenosil wrote:
>>
>>
>>
>>>I have inserted several million records inside one transaction
>>>and have not noticed any slowdown (without disabling auto_undo).
>>>Perhaps the situation is different if you have a huge number of indexes?
>>>
>>>Ivan
>>>
>>>
>>>
>>>
>>People,
>>
>>Now I am puzzled... :-/
>>
>>Should one commit after a bunch of records, or not?
>>
>>
>
>One should not solve problems that do not exist.
>One should not blindly obey every piece of advice from this
>(and not only from this) list. People here quite often offer
>advice that somebody wrote a long time ago, without
>verifying it themselves or without using it in the right context.
>
>
>
Understood.

>>I would prefer to start a transaction, import a zillion records, commit
>>at the end, or roll back on any exception, rather than commit in batches...
>>
>>
>
>And that is exactly what I am doing - it works well,
>hence I have no problem to solve :-)
>
>(What can really be slow is a rollback, especially with a lot of indexes;
>if you expect frequent rollbacks, then frequent commits can help.
>I think it should be better with FB2.)
>
>
>
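Ivan's approach above -- one transaction, a single commit at the very end, and a rollback on any exception -- can be sketched with Python's generic DB-API. This is only an illustration: sqlite3 stands in for a Firebird connection (a real Firebird driver such as firebird-driver exposes the same connect/cursor/commit/rollback pattern), and the table and column names are made up.

```python
import sqlite3

def import_all_or_nothing(rows):
    """Insert every row in a single transaction: commit once at the
    end, roll back everything on any exception."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
    try:
        cur = con.cursor()
        # One prepared statement, reused for every row.
        cur.executemany("INSERT INTO t (id, name) VALUES (?, ?)", rows)
        con.commit()       # single commit at the very end
    except Exception:
        con.rollback()     # nothing is left half-imported
        raise
    return con

con = import_all_or_nothing([(i, "row%d" % i) for i in range(10_000)])
print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # prints 10000
```

The appeal of this pattern is exactly what the quoted text says: either the whole import lands, or none of it does.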
>>What could lead to poor performance after 10k record inserts, and why
>>didn't Ivan get this penalty? Ivan, could you please tell me your
>>little secret? :-)
>>
>>
>
>You need to ask those who commit frequently why they are doing so:
>- whether it really solved some of their problems, or whether
>  somebody told them to do so (and they simply did not try
>  committing only once);
>- whether they are sure the problem is not in some connect library
>  (I remember some library refetched the whole table after each insert -
>  you can imagine the consequences...)
>
>
>- whether they do not use some too complex triggers
>  (e.g. ones that update the same data repeatedly);
>- whether they do not use some dumb triggers (e.g. with COUNT(*));
>- whether they insert data row by row by executing many (preferably prepared)
>  INSERT ... VALUES ... statements, or many rows at once using an
>  INSERT ... SELECT ... statement (in which case the size of the undo log
>  can indeed be the problem).
>
>Alexandre, have you personally experienced problems with mass inserts,
>or are you just scared in advance without trying it? :-)

I can't remember how I did it in the past, but for the last few years I have
used IBDataPump for cross-database migration, and I used the auto-commit
after 1000 records, sometimes increasing it to 5000 or so.
I really don't remember whether I ran into problems, but I have assumed (yes,
without having experienced it myself) that if I don't commit I will have
performance problems. The next time I do an import I will try it in a
single batch, and then again in 1000-record batches, and will report
back to the list with my results.

>Ivan

Thanks!
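For the record, the "commit every N records" pattern that IBDataPump's auto-commit setting implements looks roughly like this. Again a minimal sketch, not IBDataPump's actual code: sqlite3 stands in for a Firebird connection, and the table and column names are invented for the example.

```python
import sqlite3

def import_in_batches(rows, batch_size=1000):
    """Insert rows with a prepared statement, committing after every
    batch_size records instead of once at the end."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
    cur = con.cursor()
    batch, commits = [], 0
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            cur.executemany("INSERT INTO t (id, name) VALUES (?, ?)", batch)
            con.commit()   # a failure now loses at most one batch
            commits += 1
            batch = []
    if batch:              # flush the final partial batch
        cur.executemany("INSERT INTO t (id, name) VALUES (?, ?)", batch)
        con.commit()
        commits += 1
    return con, commits

con, commits = import_in_batches([(i, str(i)) for i in range(2500)], 1000)
print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0], commits)
# prints: 2500 3
```

The trade-off matches the discussion above: batching limits how much a rollback has to undo, but it gives up all-or-nothing semantics, since already-committed batches stay in the database if a later batch fails.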
See you!
--
Alexandre Benson Smith
Development
THOR Software e Comercial Ltda.
Santo Andre - Sao Paulo - Brazil
www.thorsoftware.com.br
--
No virus found in this outgoing message.
Checked by AVG Anti-Virus.
Version: 7.0.323 / Virus Database: 267.6.6 - Release Date: 08/06/2005