Subject: RE: [IBO] Interbase / WAN
Author: Jason Wharton
Post date: 2004-11-16T18:31:17Z
I believe you are better off using a third party data vehicle between your
middle tier and your client. You would use IBO in the middle tier and then
package up the data from there for delivery to the client.
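To illustrate the idea, here is a minimal sketch of that middle-tier pattern in Python. The function names and data shapes are my own inventions, not IBO's API: the middle tier fetches the whole result set close to the database, then ships it to the client as one compact payload instead of many row-by-row fetches.

```python
import json
import zlib

def fetch_rows():
    # Stands in for a query running in the middle tier (where IBO
    # would live); returns plain dicts for illustration.
    return [{"id": i, "name": f"customer {i}"} for i in range(500)]

def package_for_client(rows):
    # One serialized, compressed blob crosses the WAN in a single
    # response, rather than one round trip per fetch.
    return zlib.compress(json.dumps(rows).encode("utf-8"))

def unpack_on_client(blob):
    # The client reconstitutes the dataset locally.
    return json.loads(zlib.decompress(blob).decode("utf-8"))

payload = package_for_client(fetch_rows())
rows = unpack_on_client(payload)
```

The transport and serialization format are incidental; the point is that the client/middle-tier hop carries one consolidated payload per request.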
I don't think Firebird should be used client/server over a WAN. It's just
not designed for that kind of usage. Even with tools that try to compress
the transmission, Firebird is still going to send many more packets
than would otherwise be necessary, due to how it bundles things up for
network delivery.
In short, what's the difference between 100 network packets at full capacity
and another 100 network packets carrying the same data each compressed and
reduced in size by 80%? You are still sending 100 packets across the wire.
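To make that arithmetic concrete, here is a rough back-of-the-envelope model (the latency and bandwidth figures are assumptions for illustration, not Firebird measurements): when per-packet round-trip cost dominates, shrinking each packet's payload without reducing the packet count saves comparatively little.

```python
WAN_LATENCY_S = 0.05       # assumed 50 ms fixed cost per packet exchange
BANDWIDTH_BPS = 1_000_000  # assumed 1 Mbit/s link
PACKETS = 100
PAYLOAD_BITS = 12_000      # ~1500 bytes of payload per packet

def transfer_time(packets, bits_per_packet):
    # Each packet pays the fixed latency plus its serialization time.
    return packets * (WAN_LATENCY_S + bits_per_packet / BANDWIDTH_BPS)

full = transfer_time(PACKETS, PAYLOAD_BITS)
compressed = transfer_time(PACKETS, PAYLOAD_BITS * 0.2)  # 80% smaller payloads
print(f"full: {full:.2f}s, compressed: {compressed:.2f}s")
```

With these assumed numbers, an 80% payload reduction buys only a modest improvement, because the fixed per-packet cost is unchanged; fewer packets would help far more than smaller ones.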
Now, I suppose it might be possible for a network compression tool to
intercept network packets, reconstitute a data stream and then compress the
entire stream and reduce the number of network packets to cross the wire,
but I fail to see how it would know to combine individual fetch operations
into a consolidated network packet.
Another approach is to use the DML caching stuff to maintain carefully
updated buffered datasets on the client, but this would only be efficient
under certain usage patterns. We would need to know more about your
situation to advise here.
Regards,
Jason Wharton
www.ibobjects.com