| Subject | Re: [IB-Architect] Suppress whitespace in transmit buffers |
|---|---|
| Author | Jim Starkey |
| Post date | 2000-05-10T13:51:29Z |
At 07:33 PM 5/9/00 -0700, Bill Karwin wrote:
>> From: "Jason Wharton" <jwharton@...>Here are some answers.
>
>I conclude that Varchars aren't padded with spaces either in storage or in
>the XSQLVAR. I haven't bothered to do a TCP sniff, but I assume that the
>server does not pad spaces onto the Varchar and then strip them off in the
>client.
>
On disk, the tail of a varchar is zapped (zeroed) to make sure that the
tail is handled efficiently by the run length encoding compression
scheme.
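To make the point concrete, here is a toy run-length encoder showing why a zeroed tail costs almost nothing on disk. It is only an illustrative sketch, not InterBase's actual record compression code, and the (count, byte) output format is assumed for the example.

```c
/* Illustrative sketch only: a toy (count, byte) run-length encoder,
 * not InterBase's record compression.  A VARCHAR(64) holding "abc"
 * has a 61-byte zeroed tail that collapses into a single pair. */
#include <stdio.h>

size_t rle_encode(const unsigned char *src, size_t len, unsigned char *dst)
{
    size_t out = 0, i = 0;
    while (i < len) {
        unsigned char b = src[i];
        size_t run = 1;
        while (i + run < len && src[i + run] == b && run < 255)
            run++;
        dst[out++] = (unsigned char) run;   /* run length */
        dst[out++] = b;                     /* repeated byte */
        i += run;
    }
    return out;
}

int main(void)
{
    unsigned char field[64] = { 'a', 'b', 'c' };  /* unused tail is already zero */
    unsigned char packed[2 * sizeof field];

    printf("64-byte field packs to %zu bytes\n",
           rle_encode(field, sizeof field, packed));
    return 0;
}
```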
On the wire, there are three cases; a sketch of the individual-item case follows the list.
1. For blr level requests (isc_send, isc_receive) using the
heterogeneous protocol, individual data items are sent.
The tail of a varchar is not transmitted.
2. For blr level requests using the homogeneous protocol (both
ends on the same platform), the message is sent as a single unit.
The tail of a varchar is transmitted.
3. For server-based dsql (the current implementation), I believe
individual items are sent, meaning (I hope) that the
tail of a varchar is not transmitted.
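For the first and third cases, the effect is simply that only the used bytes of a varchar are copied into the transmit buffer. The sketch below shows the idea; the two-byte length prefix and the structure name are assumptions made for illustration, not the actual remote-protocol layout.

```c
/* Sketch of packing a varchar item by itself, dropping the unused tail.
 * The VARY64 structure and the length-prefix layout are assumed for
 * illustration; they are not the real remote-protocol encoding. */
#include <stdint.h>
#include <string.h>

typedef struct {
    uint16_t vary_length;       /* bytes actually used */
    char     vary_string[64];   /* declared capacity; tail unused */
} VARY64;

/* Copy the length word and only the used bytes into the transmit
 * buffer; the tail never goes on the wire.  Returns bytes written. */
size_t pack_varchar(const VARY64 *v, unsigned char *wire)
{
    memcpy(wire, &v->vary_length, sizeof v->vary_length);
    memcpy(wire + sizeof v->vary_length, v->vary_string, v->vary_length);
    return sizeof v->vary_length + v->vary_length;
}
```

In the homogeneous case, by contrast, the whole message buffer is copied as a unit, declared field sizes and all, which is why the tail does get transmitted there.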
I considered and decided against transport level compression
for the following reasons:
1. Network communications is basically DMA in, DMA out,
so there is very little per-byte processor involvement.
The cost is primarily in the number of transmissions,
not the size of the transmission.
2. On-the-wire compression is best handled in hardware.
The second best place to put it is in the link-level
protocol. An implementation at the application level would
likely fight or defeat a lower level implementation.
If neither the hardware nor the protocol guys cared, then
most likely #1 is correct.
As it turns out, neither the protocol guys nor the hardware guys
ever got around to on-the-wire compression (as far as I know).
Processors have gotten so fast that worrying about the cost
of compression is a waste of time. On the other hand, communication
speed is way up, which reduces the benefit of on-the-wire
compression.
I'm inclined to think that a computationally cheap (like
run length encoding) compression scheme would give some
performance improvement in some applications, but probably
wouldn't be noticeable at a system level. The two-phase
connect nature of the remote protocol would make it easy
to introduce if somebody wanted to do it. But I think there
are probably more effective ways to use the effort.
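If someone did want to try it, the negotiation could ride on the existing connect handshake: the client offers a compression scheme and the server accepts or declines. The structures and flag names below are invented for illustration; they are not the actual connect packets.

```c
/* Hypothetical sketch of negotiating cheap wire compression during the
 * two-phase connect.  All names and flags here are invented for
 * illustration; they are not the real connect/accept packets. */
#include <stdint.h>

#define WIRE_COMPRESS_NONE 0x0
#define WIRE_COMPRESS_RLE  0x1   /* cheap run-length encoding */

typedef struct {
    uint16_t protocol_version;
    uint16_t compress_offered;   /* bitmask of schemes the client supports */
} connect_request;

typedef struct {
    uint16_t protocol_version;
    uint16_t compress_chosen;    /* single scheme the server picked */
} connect_reply;

/* Server side: take RLE only if the client offered it; otherwise none. */
uint16_t choose_compression(const connect_request *req)
{
    return (req->compress_offered & WIRE_COMPRESS_RLE)
               ? WIRE_COMPRESS_RLE
               : WIRE_COMPRESS_NONE;
}
```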
Jim Starkey