Subject Re: [Firebird-Architect] Re: Record Encoding
Author Jason Dodson
Jim Starkey wrote:
> Jason Dodson wrote:
>
>>Using a plugin architecture allows sysadmins to, say, sacrifice
>>"performance" for better compression if they have that need.
>
> Performance is a delicate tradeoff between memory usage, CPU usage, and
> disk operations. A less efficient compression algorithm may be faster
> but result in more page reads/writes, so overall system performance is
> worse.

That is a blanket statement that isn't necessarily true. LZW, for
instance, while not the most efficient, has a very small footprint in
every regard, and is about as fast as you are gonna get.
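
To put rough numbers on that tradeoff, here is a quick sketch. It uses
zlib's DEFLATE at two levels as a stand-in (Python's standard library has
no LZW codec), and a made-up 4096-byte page size; the only point is that
a faster, weaker setting can cost extra pages, and therefore extra I/O:

import time
import zlib

# Moderately repetitive sample data, standing in for record images.
data = b"sample record data, moderately repetitive " * 5000
PAGE_SIZE = 4096  # hypothetical page size, just for the I/O estimate

for level in (1, 9):  # 1 = fast/weak, 9 = slow/strong
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    pages = -(-len(compressed) // PAGE_SIZE)  # ceiling division
    print(f"level {level}: {len(compressed)} bytes, "
          f"{pages} pages, {elapsed * 1000:.2f} ms")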

> As designers, we should tune for aggregate system throughput, which
> requires a careful cost/benefit analysis. A DBA isn't going to have the
> tools, the knowledge, or the time to do the analysis, and if he did, he
> would presumably come to the same conclusion as we did. On the other
> hand, there are always going to be doofuses who don't believe in
> trade-offs and who will argue that algorithm A is better because it
> requires fewer CPU cycles, or that B is better because it compresses
> better. This is how systems get a reputation for being hard to use.
>
> There will also be two schools of thought. One argues that one size
> fits all is intrinsically bad. The other argues that doing it right is
> better than giving poor users a choice of bad alternatives. You might
> note that Firebird a) unlike other databases, has no index tuning
> parameters, and b) has indexes that perform better than those of
> databases that do expose index tuning parameters. See also placement
> control.

You certainly can HAVE it work with default settings set to what you
think is best. This recommendation is about the flexibility to change
that behavior if special circumstances arise. Sure, everyone can drive a
car that can only turn right, but somewhere along the line someone NEEDS
to turn left instead of turning right three times. Maybe someone would
like to be able to turn up into the sky... who knows.
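
Concretely, the kind of hook I have in mind might look something like
this (a rough sketch; the interface and registry names are made up, not
anything in Firebird):

import zlib

class Codec:
    """Hypothetical compression plugin interface."""
    name = "base"

    def compress(self, data: bytes) -> bytes:
        raise NotImplementedError

    def decompress(self, data: bytes) -> bytes:
        raise NotImplementedError

class ZlibCodec(Codec):
    name = "zlib"

    def __init__(self, level=6):  # admins could tune ratio vs. speed
        self.level = level

    def compress(self, data):
        return zlib.compress(data, self.level)

    def decompress(self, data):
        return zlib.decompress(data)

# The engine ships with a sane default; a DBA overrides it only when
# special circumstances arise.
REGISTRY = {"zlib": ZlibCodec}
codec = REGISTRY["zlib"](level=9)  # trade CPU for smaller records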

>>I also think it is a blunder to have the compression done on the
>>client side... if the compression mechanism changes between versions
>>for whatever reason, you will lose backward compatibility.
>
> The compression code is part of the client library and, like other
> things, is negotiated at connection time. If the client doesn't have
> support for the engine algorithm, the data is sent uncompressed for the
> server to compress.

Good.
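
For concreteness, that negotiation might look something like the sketch
below (made-up names, not Firebird's actual wire protocol): the client
advertises what it supports, the server picks, and an empty intersection
falls back to sending records uncompressed so the server can compress
them itself.

SERVER_CODECS = ["lzw", "zlib"]  # server preference order, illustrative

def negotiate(client_codecs):
    """Pick the first server-preferred codec the client also supports."""
    for codec in SERVER_CODECS:
        if codec in client_codecs:
            return codec  # client compresses with this codec
    return None  # client sends raw; the server compresses on its side

print(negotiate(["zlib"]))       # -> zlib
print(negotiate(["old-codec"]))  # -> None (uncompressed on the wire)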

>>Maybe a better idea would be to have transmission compression and
>>storage compression. The transmission compression can be negotiated on
>>connection (useful for the poor client on a 486, where compression could
>>hurt more than help), and storage compression can be whatever the server
>>feels is appropriate.
>
> That's the worst of all alternatives. Besides, the 486 belongs either
> in the attic or holding a door open. We should make our tradeoffs for
> contemporary hardware, particularly when a replacement for your 486 can
> be had for $300 (shipping included).

A mentality like that is why we need multi-gigahertz machines with
gigabytes of RAM simply to run a consumer OS, browse the web, and check
email. I mean, if we are going to simply recommend "Get a better
machine", then you may as well do this all in Java.

Jason