Subject: Re: [IBO] Reduce traffic over network
Author: Geoff Worboys
Hi Thomas,

> I'm also looking for some way to optimize a batch processing
> routine.

My first thought is whether you have reason to think that your
particular problem is related to the amount of traffic over the
network... if that is why you responded on this thread rather
than creating a separate one.

> [...] The interesting thing is, with more records
> being in the result set, the throughput decreases.

This is an interesting result, but I am not certain I fully
understand it - and what relevance, if any, does the
Comm.Interval have to the discussion?

You have some batch of operations to perform, so you execute
the operations, measure the total time taken, and then divide
the number of operations by the time - is that about it?
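That measurement could be sketched roughly as follows (a hypothetical Python outline - run_batch is a placeholder for whatever DML your client actually executes):

```python
# Sketch of the throughput measurement described above: run a batch,
# time it, and divide the operation count by the elapsed time.
import time

def run_batch(operations):
    """Placeholder for the real DML batch."""
    for _ in operations:
        pass  # in the real code: execute one INSERT/UPDATE against the server

def throughput(operations):
    start = time.perf_counter()
    run_batch(operations)
    elapsed = time.perf_counter() - start
    # operations per second; guard against a zero-length timing interval
    return len(operations) / elapsed if elapsed > 0 else float("inf")
```

Running this for several batch sizes and comparing the rates is what I understand your figures to show.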

> Discussion/ideas appreciated.

> I'm still on 4.8.7 with D2006 in that project. Connect string
> is using TCP/IP, although I'm local on the server with all
> that stuff.

It could be interesting to compare the results with different
communications types - especially local vs TCP/IP.

The other thing I was wondering was whether you could modify
your DML processing to write the SQL statements to a script
file, and then execute that script using isql - timing the
execution and calculating the rate to see how it compares.

In fact, generating the script without executing the DML may
also reveal whether the problem is on the read or write side of
the process (although my guess would be a write problem rather
than a read problem).

What I am trying to get at here is isolating where the issue
is coming from: client vs server, network vs disk.

The flattening curve of the graph made from your figures makes
me think this is somehow related to a write cache of some
sort. Possibilities worth looking at:

. double-check the database is using forced writes

. if you are currently testing on a virtual machine try
doing it on a normal machine (measuring disk performance
on a VM can be a very "interesting" experience)

. see if you can disable write caching on your hard drive

. perhaps defragment the drive, and if at all possible
try to put the two database files close together

. try different buffer settings for the Firebird server

I can't really see the problem being client-side unless there
is some sort of memory growth/leak. Is there a chance that
your client is allocating more and more memory as the batch
size gets larger - perhaps causing more swapping? It could be
that such memory is all properly released when the batch is
finished - so check what happens while the batch is running.

The only other client-side effect I can think of would be the
client affecting the server - since they are both on the same
hardware in this case - by forcing the disk heads to move
about more. That would probably be the direction of my other
tests: separate the server onto different hardware and see
what happens.

Probably not much here that you have not already considered...
but it is the best I can come up with right now.

Geoff Worboys
Telesis Computing