| Subject   | Re: [Firebird-Architect] Compression |
|-----------|--------------------------------------|
| Author    | Alexander Peshkov                    |
| Post date | 2011-09-27T09:04:21Z                 |
On Tue, 27/09/2011 at 10:45 +0200, Dimitry Sibiryakov wrote:
> 27.09.2011 10:33, Alexander Peshkov wrote:
> > Must say that, for example, NTFS supports writing compressed pages to disk
> > and does not pad them with zeros - instead it puts more data on the page.
>
> Well, if that is the case, I think we don't need to duplicate this functionality -
> everybody who wants the database file to be compressed can just turn NTFS compression on.
>
This does not work as expected - with an NTFS-compressed file we can't read
a physical block from disk.
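For context, turning NTFS compression on per file (as suggested above) is a single
call to the documented Win32 `FSCTL_SET_COMPRESSION` control code. A minimal sketch -
the helper name and error handling are illustrative, not Firebird code:

```cpp
// Minimal sketch: toggling NTFS per-file compression via the documented
// FSCTL_SET_COMPRESSION control code. setNtfsCompression is a hypothetical
// helper, not part of Firebird.
#include <windows.h>
#include <winioctl.h>

bool setNtfsCompression(const wchar_t* path, bool enable)
{
    HANDLE h = CreateFileW(path, GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ, nullptr, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return false;

    // COMPRESSION_FORMAT_DEFAULT lets NTFS choose its standard algorithm.
    USHORT state = enable ? COMPRESSION_FORMAT_DEFAULT
                          : COMPRESSION_FORMAT_NONE;
    DWORD returned = 0;
    BOOL ok = DeviceIoControl(h, FSCTL_SET_COMPRESSION,
                              &state, sizeof(state),
                              nullptr, 0, &returned, nullptr);
    CloseHandle(h);
    return ok != FALSE;
}
```

The catch, as noted above, is that a transparently compressed file no longer gives
the engine a stable mapping from a database page to a physical disk block.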
> > We probably can do something like this - when adding a new record/version to
> > the page, compress it, and when the result does not fit, use another page for
> > that record/version.
>
> Isn't that exactly the way the engine already works? AFAIK, if a compressed record
> doesn't fit into the free space on the primary page, it is chained to another page.
>
No. Compressing the whole page can be more efficient than record by record.

> Well, we are currently investigating various compression options for an
> Oracle installation, and a whitepaper discusses that the CPU overhead for
> compression/decompression is minimal etc ...
BTW, can someone provide a link to those white papers? As far as I knew before,
using any decompression algorithm except RLE is slower than reading the data
from disk. This is the primary reason why we still use RLE.
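To make the RLE argument concrete, here is a minimal sketch of a signed-control-byte
run-length codec similar in spirit to the record compression Firebird uses (this is
NOT the exact on-disk format; `rleCompress`/`rleDecompress` are hypothetical names).
A positive control byte n says "copy the next n bytes verbatim"; a negative one says
"repeat the next byte -n times". Decompression is therefore little more than a
sequence of copies, which is why it can cost less than the disk read it saves:

```cpp
// Sketch of a signed-control-byte RLE codec, similar in spirit to
// Firebird's record-level compression (not the exact on-disk layout;
// function names are hypothetical).
#include <cstdint>
#include <vector>

std::vector<int8_t> rleCompress(const std::vector<uint8_t>& in)
{
    std::vector<int8_t> out;
    size_t i = 0;
    while (i < in.size())
    {
        // A run of at least three identical bytes is worth encoding.
        if (i + 2 < in.size() && in[i] == in[i + 1] && in[i] == in[i + 2])
        {
            size_t run = 3;
            while (i + run < in.size() && in[i + run] == in[i] && run < 127)
                ++run;
            out.push_back(static_cast<int8_t>(-static_cast<int>(run)));
            out.push_back(static_cast<int8_t>(in[i]));
            i += run;
        }
        else
        {
            // Collect literal bytes until the next run begins (127 max,
            // so the count fits in one positive control byte).
            size_t start = i++;
            while (i < in.size() && i - start < 127 &&
                   !(i + 2 < in.size() && in[i] == in[i + 1] && in[i] == in[i + 2]))
                ++i;
            out.push_back(static_cast<int8_t>(i - start));
            out.insert(out.end(), in.begin() + start, in.begin() + i);
        }
    }
    return out;
}

std::vector<uint8_t> rleDecompress(const std::vector<int8_t>& in)
{
    std::vector<uint8_t> out;
    size_t i = 0;
    while (i < in.size())
    {
        const int n = in[i++];
        if (n > 0)
        {
            // Literal range: a plain copy, no per-byte computation.
            out.insert(out.end(), in.begin() + i, in.begin() + i + n);
            i += n;
        }
        else if (n < 0)
        {
            // Run: one stored byte expanded -n times.
            out.insert(out.end(), static_cast<size_t>(-n),
                       static_cast<uint8_t>(in[i++]));
        }
    }
    return out;
}
```

By contrast, dictionary and entropy coders (LZ77, Huffman and the like) spend
noticeably more CPU per decoded byte, which is the trade-off being weighed against
the cost of a physical read above.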