Subject RE: [firebird-support] TcpRemoteBufferSize
Author Alan McDonald
> We use IBReplicator for replication over the internet and we have
> massive performance problems with it. For example, we get a
> transfer rate of 200 bytes/sec over ISDN and 3 KB/sec over DSL. One of
> the network admins told me that the problem is that FB sends a lot of
> very small packets over the wire; for example, replicating about 4 MB
> sends nearly 31000 packets with an average size of 80 bytes. I have
> seen in firebird.conf there is a parameter TcpRemoteBufferSize to enlarge
> the packet size and I have set it to 16K, but it didn't change
> anything. The maximum packet size I get is 1514 bytes. Then I found an
> article on enlarging the TCP packet size
>, but this didn't
> work for me either. I'm no network specialist and I can't put this
> all together, so here are my questions:
> Is it possible with the parameter TcpRemoteBufferSize to reduce the
> number of packets sent over the wire? For example, instead of sending
> 1000 packets of 80 bytes each, send 1 packet of 8000
> bytes.
> When I want to enlarge the TCP packet size, do I have to enlarge it on
> both the server and the client, or only on one of them? Does anybody
> know a good TCP packet size for a TcpRemoteBufferSize of 16K?
> I have read that FB is not intended for use over the Internet, because
> the FB protocol makes so many round trips. Is this right, or are there
> some tricks to speed up FB over the Internet?
> BTW, we have also tried ZeBeDee, but it didn't help at all. It only
> compresses the packets; the number of packets stays the
> same.
> Regards
> Guido

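For what it's worth, the batching Guido asks about (1000 writes of 80 bytes versus one write of 8000 bytes) is something an application can do for itself by coalescing small records before handing them to the socket. A minimal Python sketch, purely illustrative — this is not what IBReplicator or FB actually does, and the 8 KB flush threshold is just an example value:

```python
import socket

def send_batched(sock, records, flush_size=8192):
    """Coalesce many small records into fewer, larger socket writes.

    Rather than one send() per 80-byte record (which tends to become
    one wire packet each), gather records in a buffer and hand the
    kernel several kilobytes at a time.
    """
    buf = bytearray()
    for rec in records:
        buf += rec
        if len(buf) >= flush_size:
            sock.sendall(buf)
            buf.clear()
    if buf:
        sock.sendall(buf)  # flush the remainder
```

Note that even with batched writes, a single Ethernet frame still tops out around 1514 bytes (1500-byte MTU plus 14 bytes of Ethernet header) — the kernel segments a large write into MTU-sized packets, so the win is fewer, fuller packets, not bigger ones.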
There's a very good reason for keeping packets small, and that's the cost of
re-transmission. Small packets are faster to re-transmit. They are also good
in that they tie up switches for only short periods of time, which allows
for far greater redundancy (other people get a look in).
Now I know there is much talk about reducing the chattiness of the FB wire
protocol, but you may be experiencing something which has little to do with
packet size.
I have replication taking place over a cable connection (not too different
in bandwidth from DSL) and it is very fast.
I have also experienced total failure of replication over a private network,
and the reason could shed some light on your problem.
The network was converted to a TPIPS network. FB traffic has the "don't
fragment" bit set, but this TPIPS network was configured to ignore that,
and it went on fragmenting the packets down variable routes. Switching
networks have a habit of wanting to do that, but they are also kind to
those who ask them not to. My problem was solved when TELSTRA switched the
hardware back to obeying the request not to fragment.
Now it may be that fragmentation over small distances can be accommodated
by the FB socket connection, and that my more disparate one could not
handle it. It may also be that you are hitting a limit of your socket
connections, and that keeping the traffic unfragmented could solve it.
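The "ask the network not to fragment" request above is the IP don't-fragment (DF) bit. On Linux an application can request it through the path-MTU-discovery socket option; a rough, Linux-only sketch (the numeric constants are copied from <linux/in.h> because Python's socket module doesn't always export them by name — this is an illustration, not an option FB exposes):

```python
import socket
import sys

# Values from <linux/in.h>
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DO = 2   # set DF: never fragment, rely on path-MTU discovery

def request_dont_fragment(sock):
    """Ask the kernel to set the DF bit on this socket's packets.

    Linux-specific; returns True if applied, False on other platforms.
    """
    if not sys.platform.startswith("linux"):
        return False
    sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    return True
```

With DF set, a router that would otherwise split an oversized packet must drop it and report back instead, which is what lets the sender discover the path MTU and keep its packets whole.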
I'm not a network engineer, but I would be tempted (if you can) to try your
replication process across another line/network. It sounds like even a
direct 56K modem line between your stations might be better than what you
have. You should be getting far better performance without having to worry
about changing the size of your packets; IMHO you are looking at the wrong
thing.