| Subject   | Re: [firebird-support] Help |
| ---       | --- |
| Author    | Ivan Prenosil |
| Post date | 2005-06-06T13:19:55Z |
"Alexandre Benson Smith" wrote:
> Ivan Prenosil wrote:
>>One should not solve problems that do not exist.
>
>>I have inserted several million records inside one transaction
>>and have not noticed any slowdown (without disabling auto_undo).
>>Perhaps the situation is different if you have a huge number of indexes?
>>
>>Ivan
>>
>>
> People,
>
> Now I am puzzled... :-/
>
> One should or shouldn't commit after a bunch of records?
One should not blindly obey every piece of advice from this
(and not only this) list. People here quite often pass on
advice that somebody wrote a long time ago, without having
verified it themselves, or without using it in the right context.
> And that is exactly what I am doing - it works well.
> I would rather start a transaction, import a zillion records, commit
> at the end, or roll back on any exception, than commit in batches...
Hence I have no problem to solve :-)
(What can really be slow is rollback, especially with a lot of indexes;
if you expect frequent rollbacks, then frequent commits can help.
I think it should be better in FB2.)
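
For illustration (IMPORT_T is just a made-up table), a single-transaction
import in an isql script could look like this; isql keeps everything in
one transaction until you explicitly COMMIT or ROLLBACK:

  INSERT INTO IMPORT_T (ID, VAL) VALUES (1, 'first row');
  INSERT INTO IMPORT_T (ID, VAL) VALUES (2, 'second row');
  /* ... a zillion more INSERTs ... */
  COMMIT;  /* single commit at the very end;
              on any error, issue ROLLBACK instead */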
> What could lead to poor performance after 10k record inserts, and why
> didn't Ivan get this penalty? Ivan, could you please tell me your
> little secret? :-)
You need to ask those who commit frequently why they are doing so:
- whether it really solved some of their problems, or whether
somebody told them to do so (and they simply never tried
committing only once).
- whether they are sure the problem is not in some connection library
(I remember one library that refetched the whole table after each insert;
you can imagine the consequences ...)
- whether they use some overly complex triggers
(e.g. ones that update the same data repeatedly)
- whether they use some dumb triggers (e.g. with COUNT(*);
see the first sketch after this list)
- whether they insert data row by row by executing many (preferably prepared)
INSERT ... VALUES ... statements, or many rows at once using an
INSERT ... SELECT ... statement (in which case the size of the undo log
can indeed be the problem; see the second sketch below)
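
To show what I mean by a dumb trigger (table and trigger names here are
made up, and you would of course create only one of the two), compare a
COUNT(*) trigger with one that just increments a counter:

  SET TERM ^ ;
  /* Dumb: rescans the whole table for every inserted row */
  CREATE TRIGGER T_AI_SLOW FOR IMPORT_T AFTER INSERT AS
  DECLARE VARIABLE CNT INTEGER;
  BEGIN
    SELECT COUNT(*) FROM IMPORT_T INTO :CNT;
    UPDATE STATS_T SET ROW_CNT = :CNT;
  END ^

  /* Better: just bump the counter, no table scan */
  CREATE TRIGGER T_AI_FAST FOR IMPORT_T AFTER INSERT AS
  BEGIN
    UPDATE STATS_T SET ROW_CNT = ROW_CNT + 1;
  END ^
  SET TERM ; ^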
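
And the two insert styles from the last point (STAGING_T is a made-up
source table; the '?' placeholders are filled in by the client):

  /* Row by row: prepare once, execute many times with new parameters */
  INSERT INTO IMPORT_T (ID, VAL) VALUES (?, ?);

  /* All at once: one statement whose undo log covers every inserted
     row - this is where the undo-log size can become a problem */
  INSERT INTO IMPORT_T (ID, VAL)
  SELECT ID, VAL FROM STAGING_T;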
Alexandre, have you personally experienced problems with mass inserts,
or are you just scared in advance, without having tried it? :-)
Ivan