Subject:   Re: [IBO] Reduce traffic over network
Author:    Thomas Steinmaurer
Post date: 2010-05-07T18:53:26Z
Hi!
I'm also looking for a way to optimize a batch processing routine.
Basically, it iterates through an ordered result set (ORDER BY primary
key ASC) from beginning to end via a TIB_Cursor and
TIB_Cursor.APINext, and for each record it builds a DML statement which
is then executed against another database. The executed statement can
vary with each record in the iterated result set. Think of it as a
replication process that applies logged changes to another database.
Iterating through the result set is done with a read committed,
read-only transaction, and the DML is executed in the context of a
distributed transaction. I also have a configurable commit interval for
the distributed DML transaction. The interesting thing is that with more
records in the result set, the throughput decreases.
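To make the shape of the routine concrete, here is a minimal sketch of such a batch-replication loop in generic Python DB-API style. This is not the actual IBO/Delphi code; the names (replicate, build_dml, change_log, target) are hypothetical, and it only illustrates the per-record DML plus commit-interval pattern described above.

```python
import sqlite3

def replicate(src_conn, dst_conn, build_dml, commit_interval=500):
    """Iterate an ordered source result set, apply one DML statement per
    record to the destination, and commit every commit_interval records
    (mirroring the configurable commit interval described above)."""
    cur = src_conn.execute("SELECT * FROM change_log ORDER BY id ASC")
    pending = 0
    for row in cur:
        # The statement may vary with each record (insert/update/delete).
        stmt, params = build_dml(row)
        dst_conn.execute(stmt, params)
        pending += 1
        if pending >= commit_interval:
            dst_conn.commit()
            pending = 0
    if pending:
        dst_conn.commit()   # flush the final partial batch
```

The commit interval trades transaction overhead against the amount of work lost on failure; the measurements below vary exactly this parameter.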
Test scenario:

Comm. interval   # operations   Rate (rec/s)
          3000           3410            119
          3000           4573            113
          3000           3169            145
          3000           4177            119
          3000           3349            125
          1000           3950            124
           100           3809            130
           500           4081            126
           500           3544            136
           500           3870            121
           500          14003             57
           500          12357             65
           500           8241             85
           500           7773             78
           500           7574             76
           500           4457            112
           500           2809            146
           500           3341            148
           500            372            111
           500           3232            138
           500           3330            141
           500           3948            127
As you can see, the rate (records/second) decreases as the number of
records in the iterated result set increases.
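A quick back-of-envelope check of the figures above: since the rate is records per second, elapsed time per run is operations divided by rate. Plugging in a few rows from the table (values taken from the measurements above) shows how disproportionately long the large result sets take.

```python
# (# operations, rate in rec/s) for a few of the runs above,
# all at a commit interval of 500.
runs = [(4081, 126), (14003, 57), (12357, 65), (3948, 127)]

for ops, rate in runs:
    elapsed = ops / rate
    print(f"{ops:6d} ops @ {rate:3d} rec/s -> {elapsed:6.1f} s")
```

The 14003-record run takes roughly 245 seconds, i.e. more than seven times as long as the 4081-record run despite being only about 3.4 times larger.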
For the DML, I use a regular TIB_DSQL.Execute call.
Is there something I should watch out for, e.g. at the TIB_Cursor level,
or should I use the ExecuteDML method for DML execution, etc.?
Discussion/ideas appreciated.
I'm still on 4.8.7 with D2006 in that project. The connect string uses
TCP/IP, although I'm running locally on the server.
Thanks.
--
Best Regards,
Thomas Steinmaurer
LogManager Series - Logging/Auditing Suites supporting
InterBase, Firebird, Advantage Database, MS SQL Server and
NexusDB V2
Upscene Productions
http://www.upscene.com
My blog:
http://blog.upscene.com/thomas/
> My ordering Items and Links were in place. All I did was set FetchWholeRows = False as you recommended, with almost instantaneous results, no longer the status message informing that the query is still fetching data. LOTS!!!!! less traffic over the network!!
>
> Adrian
> Jason recommended:
> Adrian,
>
> You can use the FetchWholeRows = false property in conjunction with the
> OrderingItems and OrderingLinks properties to set this up such that IBO will
> refine what records are brought to the client along the lines you are
> looking for. This is called dataset refinement. It optimizes things based on
> navigational commands such as First and Last as well as Locate() when search
> criteria is the same as the OrderingLink columns being used.
>
> Please search the archives on these various topics as much has already been