| Subject | Re: [Firebird-Architect] Compression |
| --- | --- |
| Author | Jim Starkey |
| Post date | 2011-09-27T11:17:19Z |
I have done a study of blob compression. The question was which is
faster: compressing, writing compressed, reading compressed, and
decompressing, or just writing and reading uncompressed.
The results on a first-generation Athlon for both zlib and LZW were
terrible. JPEGs and PDFs are already compressed and were net losses.
Text cost more to compress and decompress than writing and reading it at
full size. Bummer.
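
For concreteness, here is a rough sketch (my illustration, not the harness from the study above) of that comparison for a single blob, assuming zlib: it times a plain write/read of the data against compress + write + read + uncompress. The file names and compression level are placeholders, and the writes land in the OS page cache, so real disk timings will differ.

```cpp
// Sketch: compare "compress + write + read + uncompress" against plain
// "write + read" for one blob, using zlib. Compile with -lz.
#include <zlib.h>
#include <chrono>
#include <cstdio>
#include <fstream>
#include <vector>

using Clock = std::chrono::steady_clock;

static double secondsSince(Clock::time_point start)
{
    return std::chrono::duration<double>(Clock::now() - start).count();
}

static void writeFile(const char* name, const std::vector<Bytef>& data)
{
    std::ofstream out(name, std::ios::binary);
    out.write(reinterpret_cast<const char*>(data.data()), data.size());
}

static std::vector<Bytef> readFile(const char* name)
{
    std::ifstream in(name, std::ios::binary | std::ios::ate);
    std::vector<Bytef> data(static_cast<size_t>(in.tellg()));
    in.seekg(0);
    in.read(reinterpret_cast<char*>(data.data()), data.size());
    return data;
}

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        std::fprintf(stderr, "usage: %s <blob-file>\n", argv[0]);
        return 1;
    }
    const std::vector<Bytef> blob = readFile(argv[1]);

    // Path 1: write and read the blob as-is.
    auto t0 = Clock::now();
    writeFile("blob.raw", blob);
    std::vector<Bytef> raw = readFile("blob.raw");
    double plainTime = secondsSince(t0);

    // Path 2: compress, write, read back, decompress.
    t0 = Clock::now();
    uLongf packedLen = compressBound(blob.size());
    std::vector<Bytef> packed(packedLen);
    compress2(packed.data(), &packedLen, blob.data(), blob.size(), Z_BEST_SPEED);
    packed.resize(packedLen);
    writeFile("blob.z", packed);

    std::vector<Bytef> stored = readFile("blob.z");
    uLongf plainLen = blob.size();
    std::vector<Bytef> restored(plainLen);
    uncompress(restored.data(), &plainLen, stored.data(), stored.size());
    double compressedTime = secondsSince(t0);

    std::printf("plain path:      %.3f s (%zu bytes on disk)\n", plainTime, raw.size());
    std::printf("compressed path: %.3f s (%zu bytes on disk)\n", compressedTime, stored.size());
    return 0;
}
```

Running it on a JPEG versus a large text file shows the two cases described above: on already-compressed input the packed file is barely smaller, so the zlib calls are pure overhead.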
A more database-friendly compression algorithm that traded compression
ratio for speed might help, but I don't have a candidate. I have some ideas,
but no time.
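
To make that trade-off concrete, here is a toy byte-oriented run-length codec (my sketch, not a candidate proposed in this thread), similar in spirit to the RLE Firebird already applies to records: a signed control byte either counts literal bytes or says how many times to repeat the next byte. It gives up most of zlib's ratio, but the cost per byte stays close to a memory copy.

```cpp
// Illustrative run-length codec: control byte n > 0 means "copy the next n
// literal bytes", n < 0 means "repeat the next byte -n times".
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> rleEncode(const std::vector<uint8_t>& in)
{
    std::vector<uint8_t> out;
    size_t i = 0;
    while (i < in.size())
    {
        // Count how far the current byte repeats (capped at 127).
        size_t run = 1;
        while (i + run < in.size() && in[i + run] == in[i] && run < 127)
            ++run;

        if (run >= 3)
        {
            out.push_back(static_cast<uint8_t>(-static_cast<int8_t>(run)));
            out.push_back(in[i]);
            i += run;
        }
        else
        {
            // Gather literals up to the next run of 3+ equal bytes (max 127).
            size_t start = i;
            while (i < in.size() && i - start < 127)
            {
                if (i + 2 < in.size() && in[i] == in[i + 1] && in[i] == in[i + 2])
                    break;
                ++i;
            }
            out.push_back(static_cast<uint8_t>(i - start));
            out.insert(out.end(), in.begin() + start, in.begin() + i);
        }
    }
    return out;
}

std::vector<uint8_t> rleDecode(const std::vector<uint8_t>& in)
{
    std::vector<uint8_t> out;
    size_t i = 0;
    while (i < in.size())
    {
        int8_t n = static_cast<int8_t>(in[i++]);
        if (n < 0)
            out.insert(out.end(), static_cast<size_t>(-n), in[i++]);
        else
            for (int8_t k = 0; k < n && i < in.size(); ++k)
                out.push_back(in[i++]);
    }
    return out;
}
```

Whether something this simple pays off for blobs is exactly the open question above; it only helps when the data actually contains runs.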
Founder & CTO, NuoDB, Inc.
978 526-1376
On Sep 27, 2011, at 4:05 AM, Thomas Steinmaurer <ts@...> wrote:
> Hello,
>
> I don't know if this has been discussed in the past, but is there
> anything planned for supporting compression at the page/table/record level?
>
> Having less data on disk means there is less to read from disk, and
> ideally the cache would hold compressed rather than uncompressed pages. This
> also means that more data fits into memory/cache ...
>
> Well, we are currently investigating various compression options for an
> Oracle installation, and a whitepaper claims that the CPU overhead for
> compression/decompression is minimal, etc. ...
>
> Comments, ideas ...?
>
>
> Regards,
> Thomas