Subject: Re: [Firebird-Architect] Compression
>Wow! Just got into the history of compression and was stunned to learn that
>DB2 invented record level compression as early as 2006. "Standing on each
>other's feet", indeed.
> A starting point:
OK, here's my summary.
Record level compression. We do it, InterBase did it in 1985, Rdb/ELN did
it in 1982. The algorithm is primitive, but fast. Jim has another
algorithm that compresses 40% better, which was described on this list a
couple of years ago. Snappy may be another choice, but it may depend on
having a larger and more consistent sample than a typical record provides.
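For anyone who hasn't seen it, the "primitive, but fast" style of scheme is
signed-control-byte run-length encoding. Here's a minimal sketch in C; the
127-byte run limit and the break-even point (runs of 3+) are my own
illustrative choices, not Firebird's actual source:

    #include <stddef.h>

    /* Signed-control-byte RLE: a positive control byte n says "n literal
     * bytes follow"; a negative control byte -n says "the next byte
     * repeats n times".  Limits here are illustrative, not Firebird's. */
    size_t rle_compress(const unsigned char *in, size_t len, unsigned char *out)
    {
        size_t i = 0, o = 0;
        while (i < len) {
            size_t run = 1;
            while (i + run < len && run < 127 && in[i + run] == in[i])
                run++;
            if (run >= 3) {                     /* repeat run: costs 2 bytes */
                out[o++] = (unsigned char)(-(int)run);
                out[o++] = in[i];
                i += run;
            } else {                            /* literals until next run */
                size_t start = i, lit = 0;
                while (i < len && lit < 127) {
                    if (i + 2 < len && in[i] == in[i + 1] && in[i] == in[i + 2])
                        break;
                    i++, lit++;
                }
                out[o++] = (unsigned char)lit;
                while (start < i)
                    out[o++] = in[start++];
            }
        }
        return o;   /* out must hold at least len + len/127 + 2 bytes */
    }

    size_t rle_expand(const unsigned char *in, size_t len, unsigned char *out)
    {
        size_t i = 0, o = 0;
        while (i < len) {
            int c = (signed char)in[i++];
            if (c >= 0)                         /* literal run */
                while (c-- > 0)
                    out[o++] = in[i++];
            else {                              /* repeat run */
                unsigned char b = in[i++];
                while (c++ < 0)
                    out[o++] = b;
            }
        }
        return o;
    }

The attraction is obvious: one pass, no tables, no state, and records full
of trailing blanks or nulls shrink nicely. The cost is that it only sees
byte repeats, which is where a scheme like Snappy could win, given enough
input to work with.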
Page level compression. Oracle does it, InnoDB does it. It would appear to
make random access harder. Certainly for a disk system where you need to
read and write fixed size blocks, compressing pages to arbitrary lengths
seems like an odd choice. Questions: how do you find a compressed page on
disk? How do you prevent fragmentation when you need to write back a page
that has grown or shrunk?
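The usual answer to the first question is an extra level of indirection.
As an illustration only (the names and layout here are my invention, not
any engine's actual on-disk structure), a map from logical page number to
the physical offset and length of its compressed image:

    #include <stdint.h>

    typedef struct {
        uint64_t offset;   /* byte position of the compressed image */
        uint32_t length;   /* compressed length; 0 = not allocated */
    } page_slot;

    typedef struct {
        page_slot *slots;  /* indexed by logical page number */
        uint32_t   count;
    } page_map;

    /* Locate a compressed page; returns 0 on success. */
    int page_map_lookup(const page_map *map, uint32_t page_no,
                        uint64_t *offset, uint32_t *length)
    {
        if (page_no >= map->count || map->slots[page_no].length == 0)
            return -1;                  /* unknown or unallocated page */
        *offset = map->slots[page_no].offset;
        *length = map->slots[page_no].length;
        return 0;
    }

The fragmentation question then becomes slot management: when a rewritten
page grows past its slot, you either relocate it (append and update the
map, leaving a hole behind) or round compressed pages up to a small set of
fixed slot sizes, which I believe is roughly what InnoDB's KEY_BLOCK_SIZE
scheme does. Either way the indirection and the holes are costs the
uncompressed design doesn't pay.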
Table level compression works only for systems that store tables