Subject: Re: Slowdown when processing large tables
Author: Nenad
--- In firebird-support@yahoogroups.com, "hvlad" <hvlad@...> wrote:

>
> Why do you split src_table into parts like this? Every next step (batch) has to read and skip the already-copied rows again and again, and more of them at every step.
>

Given that the src_table(s) contain 40-80 million records, I thought it was a good idea to process them in batches and commit after each batch. That lets me estimate the total time to process the whole src_table and stop/resume processing when needed. Is there a better way of doing that?
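
For reference, each batch currently does roughly the following (a sketch only: dst_table and the payload column are placeholders for my actual tables, while from_rec and to_rec are the batch-boundary parameters mentioned below):

-- One batch: return rows :from_rec through :to_rec of the ordered set.
-- Every later batch re-reads and discards all previously copied rows,
-- which is where the growing slowdown comes from.
INSERT INTO dst_table (d_id, u_id, payload)
SELECT s.d_id, s.u_id, s.payload
FROM src_table s
ORDER BY s.d_id, s.u_id
ROWS :from_rec TO :to_rec;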

>
> If you think it is absolutely necessary, remember the src.d_id and src.u_id values of the last copied row, add a WHERE clause to the SELECT query, and put the remembered values into it so that already-copied rows are skipped much faster. And, of course, replace the from_rec and to_rec parameters with a single number of rows to copy.
>

Thank you. This is really helpful.
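
For anyone finding this in the archive, here is how I understand the suggested WHERE clause (again only a sketch: dst_table, the payload column, and the :batch_size / :last_d_id / :last_u_id parameter names are mine, and I am assuming (d_id, u_id) uniquely orders src_table):

-- Copy the next batch, resuming after the last copied key.
-- :last_d_id / :last_u_id hold the key of the last row copied
-- in the previous (committed) batch.
INSERT INTO dst_table (d_id, u_id, payload)
SELECT s.d_id, s.u_id, s.payload
FROM src_table s
WHERE s.d_id > :last_d_id
   OR (s.d_id = :last_d_id AND s.u_id > :last_u_id)
ORDER BY s.d_id, s.u_id
ROWS :batch_size;

With an index on (d_id, u_id), this should let Firebird position near the resume point instead of re-reading everything, so later batches no longer pay for the rows that were already copied.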

Regards
Nenad