Subject: Re: [Firebird-Architect] Compression
Author: Thomas Steinmaurer
Post date: 2011-09-27T18:21:34Z

>> A starting point:
>>
>> http://www.dba-oracle.com/oracle11g/sf_Oracle_11g_Data_Compression_Tips_for_the_DBA.html

Oh, yes.

> Wow! Just got into the history of compression and was stunned to learn that
> DB2 invented record level compression as early as 2006. "Standing on each
> other's feet", indeed.
> OK, here's my summary.
>
> Record level compression. We do it, InterBase did it in 1985, Rdb/ELN did
> it in 1982. The algorithm is primitive, but fast. Jim's got another
> algorithm that compresses 40% more which was described on this list a couple
> years ago. Snappy may be another choice, but may depend on having a larger
> and more consistent sample than a typical record.

Yes, probably. In the case of Hadoop/HBase, we are talking about chunks of
64 MB of uncompressed data, or in production systems even larger.
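
For illustration, a minimal sketch of the kind of run-length encoding such a
record-level compressor might use, assuming a signed control byte (a positive
value introduces that many literal bytes, a negative value says how often the
next byte repeats). This shows the general technique only; it is not claimed
to match Firebird's or InterBase's actual on-disk record format.

/*
 * Run-length encoding sketch. Control byte: a positive value n is
 * followed by n literal bytes; a negative value n means "repeat the
 * next byte -n times". Illustrative only, not any engine's exact format.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Compress src[0..len) into dst; dst must hold at least
   len + len/127 + 1 bytes (worst case). Returns bytes written. */
static size_t rle_compress(const uint8_t *src, size_t len, uint8_t *dst)
{
    size_t in = 0, out = 0;

    while (in < len) {
        /* Length of the run of identical bytes starting at 'in'. */
        size_t run = 1;
        while (in + run < len && src[in + run] == src[in] && run < 127)
            run++;

        if (run >= 3) {
            /* Worth encoding as a repeat: negative count + the byte. */
            dst[out++] = (uint8_t)(-(int8_t)run);
            dst[out++] = src[in];
            in += run;
        } else {
            /* Gather literals until the next worthwhile repeat. */
            size_t start = in, lit = 0;
            while (in < len && lit < 126) {
                size_t r = 1;
                while (in + r < len && src[in + r] == src[in] && r < 127)
                    r++;
                if (r >= 3)
                    break;          /* a repeat starts here; stop literals */
                in += r;
                lit += r;
            }
            dst[out++] = (uint8_t)lit;
            memcpy(dst + out, src + start, lit);
            out += lit;
        }
    }
    return out;
}

int main(void)
{
    /* A record with lots of trailing padding compresses well. */
    uint8_t rec[64] = "SMITH";
    uint8_t buf[128];
    printf("64 bytes -> %zu bytes\n", rle_compress(rec, sizeof rec, buf));
    return 0;
}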

> Page level compression. Oracle does it, InnoDB does it. It would appear to
> make random access harder. Certainly for a disk system where you need to
> read and write fixed size blocks, compressing pages to odd lengths seems
> like an odd choice. Questions: how do you find a compressed page on disk?

Good question. I don't know. Perhaps storing meta information in an
uncompressed area?
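
As a rough sketch of that idea (with purely made-up names and layout, not how
Oracle or InnoDB actually do it): keep an uncompressed page map at a known
location that records where each compressed page starts and how long it is,
so a page can still be found with a single seek.

/*
 * Hypothetical layout for locating compressed pages: an uncompressed
 * page map, stored at a known place in the file, maps each logical
 * page number to the file offset and byte length of its compressed
 * image. Names and structures are made up for this sketch.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define LOGICAL_PAGE_SIZE 8192          /* size of a page once decompressed */

typedef struct {
    uint64_t file_offset;               /* where the compressed image starts */
    uint32_t stored_len;                /* compressed length; 0 = unused slot */
} page_map_entry;

typedef struct {
    page_map_entry *entries;            /* kept uncompressed on disk */
    uint32_t        page_count;
} page_map;

/* Stand-in for a real codec (zlib, Snappy, ...); it only copies raw
   data so the sketch stays self-contained. */
static int decompress(const uint8_t *in, uint32_t in_len,
                      uint8_t *out, uint32_t out_len)
{
    if (in_len > out_len)
        return -1;
    memcpy(out, in, in_len);
    return 0;
}

/* Random access still works: the uncompressed map tells us exactly
   where to seek, even though compressed pages have odd lengths. */
int read_page(FILE *db, const page_map *map, uint32_t page_no, uint8_t *out)
{
    if (page_no >= map->page_count)
        return -1;

    page_map_entry e = map->entries[page_no];
    if (e.stored_len == 0)
        return -1;                      /* page not allocated */

    uint8_t *buf = malloc(e.stored_len);
    if (!buf)
        return -1;

    int rc = -1;
    if (fseek(db, (long)e.file_offset, SEEK_SET) == 0 &&
        fread(buf, 1, e.stored_len, db) == e.stored_len &&
        decompress(buf, e.stored_len, out, LOGICAL_PAGE_SIZE) == 0)
        rc = 0;

    free(buf);
    return rc;
}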

> How do you prevent fragmentation when you need to write back a page that
> has grown or shrunk?

Another good question I don't have an answer to. ;-) Possibly some kind of
(automatic) reorganisation/compaction.
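
A minimal sketch of one way such a reorganisation could work, again with
everything here being hypothetical: rewrite a page in place while its new
image still fits the slot reserved for it, otherwise append it and record the
abandoned slot as a hole, and run a compaction pass once the holes exceed
some threshold.

/*
 * Hypothetical write-back strategy for variable-length compressed pages:
 * overwrite in place while the new image still fits into the slot that
 * was reserved for it, otherwise append it at the end of the file and
 * remember the old slot as a hole; a compaction pass reclaims the holes
 * once they grow past a threshold. All names are made up for this sketch.
 */
#include <stdint.h>

typedef struct {
    uint64_t offset;        /* start of the slot in the data file */
    uint32_t slot_len;      /* bytes reserved for this page on disk */
    uint32_t used_len;      /* bytes of the current compressed image */
} page_slot;

typedef struct {
    page_slot *slots;
    uint32_t   count;
    uint64_t   file_end;    /* next free offset at the end of the file */
    uint64_t   hole_bytes;  /* total size of abandoned slots */
} page_store;

/* Decide where a rewritten page goes; returns the offset to write to. */
uint64_t place_rewritten_page(page_store *ps, uint32_t page_no, uint32_t new_len)
{
    page_slot *s = &ps->slots[page_no];

    if (new_len <= s->slot_len) {
        s->used_len = new_len;          /* still fits: overwrite in place */
        return s->offset;
    }

    /* Grew past its slot: abandon the old slot, append at the end, and
       reserve a little slack so the next small growth fits in place. */
    ps->hole_bytes += s->slot_len;
    s->offset    = ps->file_end;
    s->slot_len  = new_len + new_len / 8;
    s->used_len  = new_len;
    ps->file_end += s->slot_len;
    return s->offset;
}

/* Trigger for the "(automatic) reorganisation": once enough space is
   wasted, rewrite pages contiguously and rebuild the page map (the
   compaction pass itself is omitted here). */
int needs_compaction(const page_store *ps)
{
    return ps->file_end > 0 && ps->hole_bytes * 4 > ps->file_end;  /* >25% holes */
}
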
Regards,
Thomas