Subject Re: [firebird-support] Slow at writing record containing BLOB fields
Author Helen Borrie
At 02:09 PM 30/12/2004 +0700, you wrote:

>We have been running Firebird 1.0.0.796 for around 2-3 years.
>Production database running on a relatively small machine: PIII 600MHz, 384MB
>memory, 20GB dedicated IDE harddisk, Win2000Pro
>Network speed = 100Mbit
>Numbers of client to access database = 10 clients
>Database size around = 14MB
>BLOB fields Stores as = BLOB SUB_TYPE 0 SEGMENT SIZE 80
>
>The problem:
>============================
>We suffer very slow response when writing records containing BLOB fields
>(around 30 seconds)
>
>Questions:
>============================
>1) How to improve the database writing speed ??

Missing info here:
--- how big are the blobs?
--- What is the database page size?
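You can check the page size yourself with gstat, and raise it with a backup/restore cycle if it turns out to be small (the paths below are examples; substitute your own):

```shell
# Report the database header page, which includes "Page size";
# run this on the server machine:
gstat -h C:\data\mydb.gdb

# If the page size is small (1K or 2K), it can be raised during a
# backup/restore cycle, e.g. to 8K:
gbak -b C:\data\mydb.gdb C:\data\mydb.gbk
gbak -c -p 8192 C:\data\mydb.gbk C:\data\mydb_new.gdb
```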

>2) Any idea if FB version 1.5.2.4731 will improve speeds ??

V. 1.5.x has a lot of speed enhancements. As to whether it would improve
speeds for your particular problem, it's impossible to say. If blob size,
page size, or the network is the problem, probably not.

>3) From my research below, the small server machine does not affect
>the response time much.
>However, the client machine plays a greater role in response time.
>Please advise whether this situation is normal ??

No. Processor speed isn't anywhere near as important as RAM. You have 384 MB
for 10 users on one server, 512 MB on the other. Getting more RAM into the
server will help.

>4) All my BLOB fields are stored as BLOB SUB_TYPE 0 SEGMENT SIZE 80 ..
>Please advise on the best format/way to store BLOB data ??

It doesn't matter. To the server, a blob is just a blob. With DSQL, even
the segment size doesn't matter, since the server will decide how to carve
up the blob for storage. Page size matters more. If you are trying to
store huge blobs and the page size is small, the engine will be making a
lot of requests to the OS for memory as the write proceeds.

You could try Ivan's trick to make more space available and reduce the
number of disk allocation requests at write time: on the server, create a
dummy table with no keys or indexes, in your case consisting of just an id
and a blob; run a script to fill it with several million identical records;
commit; then drop the table. This will increase the size of the database
file, and all of that allocated space will be available for your future
blob writes.
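A minimal sketch of that trick as an ISQL script (FILLER and FILL_SPACE are made-up names, and since Firebird 1.x has no EXECUTE BLOCK, a throwaway stored procedure does the looping; adjust the row count and filler text to taste):

```sql
/* Dummy table: no keys, no indexes -- just an id and a blob. */
CREATE TABLE FILLER (ID INTEGER, DATA BLOB SUB_TYPE 0);
COMMIT;

SET TERM ^ ;
CREATE PROCEDURE FILL_SPACE (N INTEGER)
AS
DECLARE VARIABLE I INTEGER;
BEGIN
  I = 0;
  WHILE (I < N) DO
  BEGIN
    INSERT INTO FILLER (ID, DATA) VALUES (:I, 'padding padding padding');
    I = I + 1;
  END
END ^
SET TERM ; ^

EXECUTE PROCEDURE FILL_SPACE (1000000);
COMMIT;

/* Dropping the table releases the pages back into the database's
   free space, ready to be reused for your blob writes. */
DROP TABLE FILLER;
DROP PROCEDURE FILL_SPACE;
COMMIT;
```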


>We have studied the response time below:
>===============================
>1) Test database installed on test server machine P4 2.6GHz, 512MB,
>Win2000Pro
>Test client on machine Celeron 1GHz, 512MB, Win2000Pro
>Writing BLOB fields took around 30 seconds
>
>2) Test database installed on test server machine PIII 600MHz, 384MB,
>Win2000Pro
>Test client on machine Celeron 1GHz, 512MB, Win2000Pro
>Writing BLOB fields took around 30 seconds
>
>3) Test database installed on test server machine PIII 600MHz, 384MB,
>Win2000Pro
>Test client on machine P4 2.6GHz, 512MB, Win2000Pro
>Writing BLOB fields took around 18 seconds
>
>4) Test database installed on test server machine P4 2.6GHz, 512MB,
>Win2000Pro
>Test client on the same machine (local database access)
>Writing BLOB fields took around 8 seconds

8 secs is a long time to write a moderately sized blob locally (big blob?
small page size?). But, aside from that, these numbers tell you *not* that
the clients are "worse" but that the network connections are the
problem. In one case the network is adding 125% to the cost; in the other
it is adding 275%. I would guess that the very slow one has a network
printer hanging off it, or is hosting a share that is being accessed often
by other users.

Are you by any chance trying to use named pipes as your network
protocol? On Win2K Pro there are limits for NP networking. If you are
using NP, change over to TCP/IP and test the difference.
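For the record, the protocol is selected by the form of the connection string (server name and paths here are examples):

```
TCP/IP:                server_name:C:\data\mydb.gdb
Named pipes (WNET):    \\server_name\C:\data\mydb.gdb
Local (no network):    C:\data\mydb.gdb
```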

Also take other traffic into account if you have file and device sharing
going on around the network. A noisy network will tend to slow the
progress of data packets on the wire.

./hb