Subject | Relatively slow extraction of BLOBs
---|---
Author | lele@nautilus.homeip.net
Post date | 2007-03-01T14:31:17Z
Hi all,
I built an application that stores various kinds of resources in a Firebird database. I have a set of tables that describe each resource (be it another piece of the application like a DLL, an image, or whatever); the key one looks like the following:
CREATE TABLE RESOURCES (
IDRESOURCE SMALLINT NOT NULL,
EXTENSION CHAR(3) NOT NULL,
RESOURCE BLOB SUB_TYPE 0 SEGMENT SIZE 32767
);
Given a resource, the application loads it, compresses its content with bzip2, and finally sends it to the database. It also supports the other direction, that is "downloading" a given resource, uncompressing it, and then exposing the content as a stream or directly on the local filesystem.
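In the Python variant of my tests, the compress/decompress step is essentially equivalent to this sketch using the standard `bz2` module (the function names here are just illustrative, not the actual application code):

```python
import bz2

def compress_resource(data: bytes) -> bytes:
    # bzip2-compress the raw resource bytes before storing them as a BLOB
    return bz2.compress(data)

def decompress_resource(blob: bytes) -> bytes:
    # reverse step, applied when "downloading" a resource from the database
    return bz2.decompress(blob)

# round-trip check on some sample bytes
payload = b"example resource content " * 1000
stored = compress_resource(payload)
restored = decompress_resource(stored)
assert restored == payload
```

Compression itself is cheap compared to the timings below, which is why I'm confident the bottleneck is elsewhere.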
I tried several approaches and got reasonable performance, but I guess there's still room for improvement: while I'm able to upload a single 8.8 MB resource (1.1 MB once bzipped) in a mere 6.5 seconds, it takes 55 seconds to extract it. The bottleneck is the extraction, not the decompression: with decompression disabled, it still takes a little more than 54 seconds to store the resulting 1.1 MB on my local disk.
This is a Delphi application using FreeIBComponents to access the database, but I get roughly the same timings with IBExpert, or with Python and kinterbasdb, even operating directly on the server, so I don't think the client library matters at all.
I googled around and found several docs that talk about BLOBs: most say that "segment size is a relic from the past", but I seem to get slightly better timings using an explicit "big" segment size, as shown above.
Does anybody have a hint, or an explanation for the different timings between upload and download?
Thank you in advance,
bye, lele.