Subject: Re: Record Encoding
Author: Roman Rokytskyy
Claudio,

> This is very different than saying "heck, if you want to use blobs,
> we are going to put all blobs in memory, so prepare your budget for
> tons of RAM".

Jim did not suggest putting BLOBs into RAM, or more precisely, not
complete BLOBs. His idea is to compress the database pages that
contain BLOBs and send them compressed over the wire to the client.
The client would then allocate some memory and decompress them there.
He argues that fewer page reads would be needed, which would improve
overall performance.
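The idea can be sketched in a few lines of Python. This is only an
illustration, not Firebird code: zlib stands in for whatever codec the
engine would actually use, and the function names are made up for the
example.

```python
import zlib

def server_send_page(page: bytes) -> bytes:
    """Compress a database page before sending it over the wire.
    (Hypothetical helper; zlib is a stand-in codec.)"""
    return zlib.compress(page)

def client_receive_page(wire_bytes: bytes) -> bytes:
    """Decompress a received page into client-side memory."""
    return zlib.decompress(wire_bytes)

page = b"some fairly repetitive blob content " * 100
wire = server_send_page(page)
assert client_receive_page(wire) == page
# len(wire) < len(page): fewer bytes cross the wire, and on the server
# side the same data fits in fewer pages, hence fewer page reads.
```

The win comes from the compressible nature of typical BLOB content;
for already-compressed data (JPEG, ZIP) the saving would of course be
negligible.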

This approach should work fine except when somebody needs to seek
within the BLOB, which I need. That is the only case where Jim
suggested loading the BLOB into RAM, decompressing it, seeking there,
and returning the requested data to the client uncompressed.
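Why seeking forces full decompression: a stream codec like zlib offers
no random access into the compressed bytes, so the client has to
inflate the whole BLOB first and then slice it. A minimal sketch,
again with zlib as an assumed stand-in codec and an invented function
name:

```python
import zlib

def blob_seek_read(compressed_blob: bytes, offset: int, length: int) -> bytes:
    """Read `length` bytes at `offset` from a compressed BLOB.
    The stream format has no random access, so the entire BLOB is
    decompressed into RAM before the requested slice is returned."""
    data = zlib.decompress(compressed_blob)
    return data[offset:offset + length]

blob = bytes(range(256)) * 64          # 16 KB of sample data
compressed = zlib.compress(blob)
assert blob_seek_read(compressed, 1000, 16) == blob[1000:1016]
```

So the RAM cost is bounded by the size of one BLOB, not by "all BLOBs
in memory".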

Others seem to be happy with isc_get_segment and isc_put_segment
only, so I see no reason to discuss hardware prices: with BLOB
compression the client library would need only a few more kilobytes
to keep the decompressed content of the BLOB (usually more than the
client has asked for) and slightly more complicated code for the
isc_get_segment and isc_put_segment calls in the client library.

Though I agree with you that it would be better if we had the
possibility to switch the compression off. At least for me.

Roman