Subject | Re: [Firebird-Architect] Compression |
---|---|
Author | Jim Starkey |
Post date | 2011-09-27T14:10:24Z |
On 9/27/2011 7:36 AM, Thomas Steinmaurer wrote:
> Jim,
>
> > I have done a study of blob compression. The question was which is
> > faster: compressing, writing compressed, reading compressed, and
> > decompressing, or just writing and reading uncompressed.
> >
> > The results on a first-generation Athlon for both zlib and LZW were
> > terrible. JPEGs and PDFs are already compressed and were net losses.
> > Text cost more to compress and decompress than writing and reading at
> > full size. Bummer.
> >
> > A more database-friendly compression algorithm that traded compression
> > ratio for speed might help, but I don't have a candidate. I have some
> > ideas, but no time.
>
> Is NuoDB compressing data before it goes to disk or, possibly even more
> important in a distributed system, when data goes over the network?
>
NuoDB doesn't use disk like other database systems. Instead, it uses
archive nodes to keep serialized copies of distributed objects.
But, that said, it uses a compression encoding for user data, messages,
and serialized objects alike, a technique that evolved from a suggestion
on this list four or five years ago. Measured over a large database, the
encoding was 40% smaller than run-length encoding on disk and 60% smaller
than traditional Firebird records in memory.
And it's platform neutral, to boot.
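The encoding itself isn't spelled out here, so the following is only an
illustrative sketch of the general idea behind a compact, platform-neutral
value encoding, not NuoDB's actual format: tag each integer with a length
byte and store only the bytes it needs, instead of a fixed-width field.

```python
# Illustrative only: a toy, platform-neutral value encoding in the spirit of
# "store each value in as few bytes as it needs". This is NOT NuoDB's format.

def encode_int(value: int) -> bytes:
    # One length byte, then the value in minimal big-endian two's complement.
    length = max(1, (value.bit_length() + 8) // 8)  # +8 leaves room for the sign bit
    return bytes([length]) + value.to_bytes(length, "big", signed=True)

def decode_int(buf: bytes, offset: int = 0) -> tuple[int, int]:
    # Returns (value, next offset).
    length = buf[offset]
    start = offset + 1
    return int.from_bytes(buf[start:start + length], "big", signed=True), start + length

record = [0, 7, -1, 300, 2**40]
encoded = b"".join(encode_int(v) for v in record)
print(len(encoded), "bytes encoded vs.", 8 * len(record), "bytes as fixed 64-bit fields")

# Round trip to show the decode side.
offset, decoded = 0, []
while offset < len(encoded):
    value, offset = decode_int(encoded, offset)
    decoded.append(value)
assert decoded == record
```

A real record encoding would also have to tag strings, nulls, and scaled
numerics, but the byte-count arithmetic above is where the savings over
fixed-width records, and hence over run-length encoding of them, would
come from.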
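Going back to the blob study quoted above: the compress-and-write versus
write-uncompressed trade-off can be sketched roughly as below. The data,
paths, and compression level are made up, so the output only illustrates
the shape of the comparison, not the original results.

```python
# Rough sketch of "compress then write" vs. "write uncompressed" for blobs.
# Payload, paths, and zlib level are placeholders, not the original study.
import os
import time
import zlib

def write_raw(path: str, data: bytes) -> None:
    with open(path, "wb") as f:
        f.write(data)

def write_zlib(path: str, data: bytes) -> None:
    with open(path, "wb") as f:
        f.write(zlib.compress(data, level=1))  # fastest zlib setting

# Random bytes stand in for already-compressed JPEG/PDF blobs; the repeated
# text is the compressible case.
blob = os.urandom(1 << 20) + b"some compressible text " * 100_000

for name, writer in (("raw", write_raw), ("zlib", write_zlib)):
    path = f"/tmp/blob_{name}.bin"
    start = time.perf_counter()
    writer(path, blob)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.4f} s, {os.path.getsize(path)} bytes on disk")
```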
> Did you have a look at Snappy, Google's compression algorithm used in
> their BigTable implementation?
>
> Currently, we are quite happy with it in a prototype Hadoop/HBase
> cluster installation with 8 nodes.
>
> Regards,
> Thomas
>
> > Founder & CTO, NuoDB, Inc.
> > 978 526-1376
> >
> > On Sep 27, 2011, at 4:05 AM, Thomas Steinmaurer <ts@...> wrote:
> >
> >> Hello,
> >>
> >> I don't know if this has been discussed in the past, but is there
> >> anything on the plan for supporting compression at
> >> page/table/record-level?
> >>
> >> Having less data on disk means there is less to read from disk, and,
> >> ideally, the cache holds compressed rather than uncompressed pages. That
> >> also means more data fits into memory/cache ...
> >>
> >> Well, we are currently investigating various compression options for an
> >> Oracle installation, and a whitepaper claims that the CPU overhead for
> >> compression/decompression is minimal, etc. ...
> >>
> >> Comments, ideas ...?
> >>
> >>
> >> Regards,
> >> Thomas
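To make the compressed-cache idea from the original question concrete,
here is a minimal sketch of a cache that keeps pages compressed in memory
and decompresses them on access. It uses zlib from the Python standard
library and is only an illustration of the idea, not how Firebird's (or
Oracle's) page cache actually works.

```python
# Minimal sketch of a page cache that stores pages compressed so more of them
# fit in the same amount of memory. Illustrative only.
import zlib

class CompressedPageCache:
    def __init__(self) -> None:
        self._pages: dict[int, bytes] = {}  # page number -> compressed bytes

    def put(self, page_no: int, page: bytes) -> None:
        self._pages[page_no] = zlib.compress(page, level=1)

    def get(self, page_no: int) -> bytes | None:
        compressed = self._pages.get(page_no)
        return None if compressed is None else zlib.decompress(compressed)

    def memory_used(self) -> int:
        return sum(len(c) for c in self._pages.values())

cache = CompressedPageCache()
page = b"INSERT INTO T ...".ljust(4096, b"\x00")  # a mostly empty 4 KB page
cache.put(1, page)
print(cache.memory_used(), "bytes cached for a", len(page), "byte page")
assert cache.get(1) == page
```

The trade-off is exactly the one raised above: every cache hit now pays a
decompression, in exchange for fitting more pages into the same memory.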
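And for the Snappy suggestion in the thread, a minimal round trip with the
python-snappy binding looks like the sketch below; the package and payload
are assumptions for illustration, not anything from the cluster mentioned
above.

```python
# Minimal Snappy round trip via the python-snappy binding
# (pip install python-snappy). Snappy trades compression ratio for very low
# CPU cost, which is the property being suggested in the thread.
import snappy

payload = b"row data, row data, row data " * 1000
compressed = snappy.compress(payload)
restored = snappy.uncompress(compressed)

assert restored == payload
print(len(payload), "->", len(compressed), "bytes")
```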