Subject: Re: [Firebird-Architect] Re: Record Encoding
Author: Arno Brinkman
Hi All,

<snip>
> I originally had in mind using blob filters for compression, but that
> really hasn't panned out. On the other hand, blob compression is a good
> thing, and almost always useful (the exception is where it is already
> compressed). It does take cycles to compress and decompress, but cycles
> are increasing much faster than disk bandwidth.
>
> The more I think about it, the more I think that blobs should always be
> stored compressed, but that compression/decompression should be performed
> primarily on the client side. The implication is that the plumbing
> needs to be compression aware, at least at the lower levels of the API.
> If we're going to build in end-to-end compression, we pretty much need
> to standardize on a single compression scheme, and zlib is the obvious
> candidate.
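
For reference, a zlib round trip over one blob segment could look roughly like this
(using the standard compressBound()/compress2()/uncompress() calls; the function names
and buffer handling here are just an illustrative sketch, not a proposed interface):

#include <stdexcept>
#include <vector>
#include <zlib.h>

// Compress one blob segment; returns only the bytes actually produced.
std::vector<Bytef> compressSegment(const Bytef* data, uLong length)
{
    uLongf outLen = compressBound(length);            // worst-case output size
    std::vector<Bytef> out(outLen);
    if (compress2(out.data(), &outLen, data, length, Z_DEFAULT_COMPRESSION) != Z_OK)
        throw std::runtime_error("compress2 failed");
    out.resize(outLen);                               // shrink to the real size
    return out;
}

// Expand a segment; the caller supplies an upper bound on the decoded size.
std::vector<Bytef> expandSegment(const Bytef* data, uLong length, uLong decodedBound)
{
    uLongf outLen = decodedBound;
    std::vector<Bytef> out(outLen);
    if (uncompress(out.data(), &outLen, data, length) != Z_OK)
        throw std::runtime_error("uncompress failed");
    out.resize(outLen);
    return out;
}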

enum DataStreamCode
{
    edsNull = 1,

    edsIntMinus10 = 10,    // literal integers -10 to 31
    // ...
    edsCompression = 180,  // a compressed sub-stream follows
    // ...
};

If edsCompression is seen, the data is decompressed and the decode procedure is called
recursively on the result. edsCompression is followed by a compression type and whatever
information that compression type needs. With this, the encoded form can travel unchanged
between client and server, or anywhere else.
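
A rough sketch of that recursion (DataStream, Receiver and expand() are placeholder
names for illustration, not existing Firebird interfaces):

// Placeholder interfaces; the real reader/writer types are not part of the proposal.
struct DataStream {
    virtual bool atEnd() const = 0;
    virtual int  getCode() = 0;                 // next DataStreamCode from the stream
    virtual ~DataStream() {}
};
struct Receiver {
    virtual void handle(int code, DataStream& in) = 0;   // consume one literal item
    virtual ~Receiver() {}
};

// Hypothetical: expand a compressed sub-stream with the given method (e.g. zlib)
// and return a stream over the decoded bytes.
DataStream& expand(int method, DataStream& in);

void decode(DataStream& in, Receiver& out)
{
    while (!in.atEnd())
    {
        int code = in.getCode();
        if (code == edsCompression)
        {
            int method = in.getCode();          // compression type follows the marker
            decode(expand(method, in), out);    // decompress, then decode recursively
        }
        else
            out.handle(code, in);               // nulls, literal integers, etc.
    }
}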

Compression can be done in packets of X bytes, so the seek Roman asked for is still
possible, and searching through the BLOB data doesn't require the complete BLOB to be
expanded.
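
A rough sketch of how such packet-wise compression keeps seeks cheap (CHUNK_SIZE,
StoredBlob and packetData()/packetLength() are illustrative names only, and
expandSegment() is the zlib helper sketched above):

#include <algorithm>
#include <vector>
#include <zlib.h>

const uLong CHUNK_SIZE = 32 * 1024;   // illustrative packet size ("X bytes")

// Placeholder for however the engine hands out the stored packets of a blob.
struct StoredBlob {
    virtual const Bytef* packetData(uLong packetNo) const = 0;
    virtual uLong        packetLength(uLong packetNo) const = 0;
    virtual ~StoredBlob() {}
};

// Read `length` bytes starting at `offset` without expanding the whole blob:
// only the packets that cover the requested range are decompressed.
std::vector<Bytef> readAt(const StoredBlob& blob, uLong offset, uLong length)
{
    std::vector<Bytef> result;
    while (length > 0)
    {
        uLong packetNo = offset / CHUNK_SIZE;          // which packet holds the byte
        std::vector<Bytef> chunk = expandSegment(
            blob.packetData(packetNo), blob.packetLength(packetNo), CHUNK_SIZE);
        uLong start = offset % CHUNK_SIZE;
        uLong take = std::min<uLong>(length, (uLong) chunk.size() - start);
        result.insert(result.end(), chunk.begin() + start, chunk.begin() + start + take);
        offset += take;
        length -= take;
    }
    return result;
}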

We use our own compression to speed up blob data transfers, but I would prefer to see
this handled by Firebird.

Regards,
Arno Brinkman
ABVisie
