Subject: Re: [firebird-support] Re: [firebird-support] restoring database with blobs
Author: Ann W. Harrison
Ali Gökçen wrote:
>> My blob segment size is default (will it be enough if I change it
>> to 1024 or more?),
>> my db page size is 2048, my disc cluster size is 4kB
>
>
> bad news..
> your FB wants to read or write a 2048-byte page from/to disc,
> but the OS does it as 4096-byte disc I/O. 1/2 of your disc I/O is
> wasted.

The declared blob segment size doesn't matter. Blob segments are as
long as you (or your interface software) specify. Old static
utilities used the segment size as an estimate of the likely transfer
size so they could preallocate buffer space. Dynamic interfaces don't
have that problem. Forget about segment size; it doesn't matter. Do
transfer data to blobs in the largest chunks possible - the engine uses
16343 as its normal size for the blobs in metadata objects. Using a
larger default size doesn't waste space - Firebird stores only as much
blob data as is passed - so if your buffer is 16K and you pass 40 bytes,
only 40 bytes are stored.
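
To make the "largest chunks possible" advice concrete, here's a minimal
sketch using the classic Firebird C API. It's not a complete program -
the attach, transaction start, and isc_create_blob2 call are assumed to
have happened already - and the helper name and the 16K chunk size are
just illustrative:

#include <ibase.h>

/* Push len bytes into an already-opened blob, sending segments of up
   to 16K at a time, regardless of the segment size declared on the
   column. The engine stores only the bytes actually passed. */
static ISC_STATUS write_blob_chunked(ISC_STATUS *status,
                                     isc_blob_handle *blob,
                                     const char *data, size_t len)
{
    const size_t CHUNK = 16 * 1024;   /* a large transfer size */
    size_t offset = 0;

    while (offset < len) {
        unsigned short seg = (unsigned short)
            (len - offset < CHUNK ? len - offset : CHUNK);
        if (isc_put_segment(status, blob, seg, data + offset))
            return status[1];         /* error code from the status vector */
        offset += seg;
    }
    return 0;
}

Whether the column was declared with segment size 80 or 1024, the loop
above is unaffected - the declared value is only a hint for old static
interfaces.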

Don't try to tune your blobs to disk or database page size. The
database always writes in page-sized blocks, and pages have some
overhead that varies with the page type.

Regards,


Ann