Subject: Re: Record Encoding
Author: Roman Rokytskyy
> I'm not going to argue that there aren't applications where UDFs or
> blob filters or embedded Java or stored procedures aren't going to
> want to manipulate blobs. I am going argue that that capability
> shouldn't dictate the on disk storage format.

But you propose to change the storage format in a way that decreases
performance for my application. I object. :)

Also, fetching all blobs into memory is not so easy. I don't know if
you're aware of it, but the Sun JDK 1.4.x JVM can address less than
1.5 GB of RAM (if I'm not wrong, it is only 1 GB). My application
needs around 800 MB to support 150 simultaneous users, so only 200 MB
are left for my blobs. Sorry, that is not much.

> We can continue to support blob seek with the simple expedient of
> fetching and decompressing the blob into memory and doing random
> access. The other 99.99% of the cases can get full advantage from
> blob compression.

Fetching a 2 MB blob into memory in order to get a 4 KB block from
it? Not very efficient, is it?
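For what it's worth, here is a minimal sketch of why a seek into a
compressed blob is costly. This is not Firebird code; it just uses plain
java.util.zip deflate as a stand-in for whatever blob compression would
be used, and the class and method names are made up. A deflate stream
cannot be entered at an arbitrary byte, so reading 4 KB at some offset
means inflating the entire prefix up to that offset first:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical illustration, not an actual engine API.
public class BlobSeekCost {

    static byte[] compress(byte[] data) {
        Deflater def = new Deflater();
        def.setInput(data);
        def.finish();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!def.finished()) {
            int n = def.deflate(buf);
            bos.write(buf, 0, n);
        }
        return bos.toByteArray();
    }

    // To return len bytes at offset, offset + len bytes must be inflated:
    // the whole prefix ends up in memory just to serve a small read.
    static byte[] readAt(byte[] compressed, int offset, int len)
            throws Exception {
        Inflater inf = new Inflater();
        inf.setInput(compressed);
        byte[] prefix = new byte[offset + len];
        int done = 0;
        while (done < prefix.length) {
            done += inf.inflate(prefix, done, prefix.length - done);
        }
        byte[] out = new byte[len];
        System.arraycopy(prefix, offset, out, 0, len);
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] blob = new byte[2 * 1024 * 1024];          // a 2 MB blob
        for (int i = 0; i < blob.length; i++) {
            blob[i] = (byte) (i % 251);
        }
        byte[] packed = compress(blob);
        // Read 4 KB near the middle: ~1 MB gets inflated to serve it.
        byte[] block = readAt(packed, 1_000_000, 4096);
        System.out.println(block[0] == blob[1_000_000]);
    }
}
```

The 4 KB block comes back correct, but the work done is proportional to
the offset, not to the size of the read.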

As to the type system, JDBC distinguishes the BLOB, CLOB and BINARY
data types. I have no problem with BLOB being compressed or CLOB
carrying charset info, but please give me BINARY with no compression,
just as it is implemented at present.
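For reference, JDBC already exposes these as distinct SQL type codes in
java.sql.Types, so a driver (or the engine behind it) is free to treat
each one differently without any change to client code:

```java
import java.sql.Types;

public class JdbcTypeCodes {
    public static void main(String[] args) {
        // Distinct constants defined by the JDBC specification:
        // BLOB = 2004, CLOB = 2005, BINARY = -2, VARBINARY = -3.
        System.out.println("BLOB=" + Types.BLOB
                + " CLOB=" + Types.CLOB
                + " BINARY=" + Types.BINARY
                + " VARBINARY=" + Types.VARBINARY);
    }
}
```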