Subject | Re: Slowdown when processing large tables |
---|---|
Author | Nenad |
Post date | 2010-12-01T13:37:33Z |
--- In firebird-support@yahoogroups.com, "hvlad" <hvlad@...> wrote:
> Why do you split src_table into these parts? Every next step (batch) has to read and skip the already copied rows again and again, and more of them at every step.

Given that the src_table(s) hold 40-80 million records, I thought it was a good idea to process them in batches and commit after each batch. That will help me estimate the total time to process the whole src_table and to stop/resume processing when needed. Is there a better way of doing that?
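For context, a skip-based batch select of the kind being discussed looks roughly like the sketch below. It is only an illustration of the pattern, not the actual procedure; the column list is a placeholder, and from_rec/to_rec are the range parameters referred to further down:

/* Sketch of a skip-based batch (Firebird SQL). Every call has to walk
   past the first from_rec - 1 rows before it returns anything, so each
   later batch takes longer than the one before it. */
SELECT d_id, u_id, payload
FROM src_table
ORDER BY d_id, u_id
ROWS :from_rec TO :to_rec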
> If you think it is absolutely necessary, remember the src.d_id and src.u_id values of the last copied row, add a WHERE clause to the SELECT query, and put the remembered values into it to skip the already copied rows faster. And, of course, replace the from_rec and to_rec parameters with a single number-of-rows-to-copy parameter.

Thank you. This is really helpful.
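If I read the suggestion correctly, the batch procedure would end up looking roughly like the PSQL sketch below. This is only my reading of it, with placeholder table, column, and parameter names, and with a single rows_to_copy parameter in place of from_rec/to_rec:

SET TERM ^ ;

CREATE PROCEDURE copy_batch (
  last_d_id INTEGER,        /* key of the last row already copied;       */
  last_u_id INTEGER,        /* start with values below the smallest key  */
  rows_to_copy INTEGER)     /* single "how many rows" parameter          */
RETURNS (
  new_d_id INTEGER,         /* key of the last row copied by this call;  */
  new_u_id INTEGER)         /* pass these back in for the next batch     */
AS
BEGIN
  FOR SELECT d_id, u_id
      FROM src_table
      WHERE d_id > :last_d_id
         OR (d_id = :last_d_id AND u_id > :last_u_id)
      ORDER BY d_id, u_id
      ROWS :rows_to_copy
      INTO :new_d_id, :new_u_id
  DO
    INSERT INTO dst_table (d_id, u_id)
    VALUES (:new_d_id, :new_u_id);
END^

SET TERM ; ^

The caller would EXECUTE PROCEDURE copy_batch once per batch, commit, and feed the returned new_d_id/new_u_id back in as last_d_id/last_u_id for the next call. Assuming an index on (d_id, u_id), each batch then starts near where the previous one stopped instead of skipping over everything already copied, and NULL outputs mean there was nothing left to copy.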
Regards
Nenad