Subject | Re: [firebird-support] BLOB segment size: what is it & is it needed when saving BLOBs? |
---|---|
Author | Ann Harrison |
Post date | 2011-11-18T17:10:07Z |
Reinier Olislagers wrote:
> I came across BLOB segment size and understand that, when writing BLOBs,
> you need to write in chunks smaller than or equal to the segment size.

Your understanding is wrong. The segment size was a suggestion for
some higher-level tools that wanted a hint as to the size of blob
chunks that would be convenient to handle. To the best of my knowledge,
nothing uses it now. Even when it was used, it did not limit the size
of segments that could be passed in and out.

That interface was designed in 1982 - think about the increase in
available RAM since then.
> Presumably having a larger segment size might improve performance for
> reading/writing large BLOBs, but might lead to bigger storage
> requirements per BLOB?

Pass the largest chunks that are convenient to handle, remembering
that at various places there are 16-bit integers that describe the
length of things. Passing larger chunks has no effect on the storage
size.
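Since those lengths are 16-bit values, the practical ceiling for a single segment is 65,535 bytes, and a caller with a larger blob simply loops. A minimal sketch of that chunking logic (plain Python; `put_segment` here is a hypothetical callback standing in for the real segment-writing call, not Firebird's actual API):

```python
# Sketch: feed a large blob to a segment-based API whose length
# parameter is a 16-bit unsigned integer (so at most 65535 bytes
# per call). `put_segment` is a stand-in callback, not Firebird's API.

MAX_SEGMENT = 65535  # largest length a 16-bit unsigned field can describe

def write_blob(data: bytes, put_segment) -> int:
    """Pass `data` to `put_segment` in chunks of at most MAX_SEGMENT bytes.

    Returns the number of segments written.
    """
    segments = 0
    for offset in range(0, len(data), MAX_SEGMENT):
        put_segment(data[offset:offset + MAX_SEGMENT])
        segments += 1
    return segments

# Example: a 150,000-byte blob needs three segments.
chunks = []
count = write_blob(b"x" * 150_000, chunks.append)
print(count, [len(c) for c in chunks])  # 3 [65535, 65535, 18930]
```

The segment boundaries chosen by the caller are irrelevant to what ends up on disk, per the point above: the server reassembles the segments regardless of how they were split.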
> I did find this post by Helen Borrie:
>
> http://tech.groups.yahoo.com/group/firebird-support/message/94611
>> From Firebird's perspective, can any amount of data be written
>> to a blob in a single operation? Without segmenting it?
> Broadly, yes. In the modern era the segmenting of blobs occurs at the server
> side, according to some algorithm determined by page size and probably also
> whether blob data is stored on blob pages or data pages.

As you pass in segments of a blob, they are written to empty pages in
cache, concatenating the segments if necessary. If the total size
of the blob is less than a page, it will be written on a data page.
If not, Firebird builds a vector of the page numbers that hold the parts
of the blob, writes out the blob pages, then writes the vector to a
data page. If the vector itself won't fit on a page, Firebird writes
the vector to a series of pages, keeping a vector of those vector
pages.
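That page/vector scheme can be illustrated with a toy calculation. This sketch is an assumption-laden simplification of the description above - it pretends a page offers a flat 4 KB of usable space and that page numbers take 4 bytes, whereas real pages carry headers and the actual capacities differ:

```python
# Toy model of the storage decision described above. The constants are
# illustrative assumptions, not Firebird's real on-disk layout.

PAGE_SIZE = 4096          # assumed usable bytes per page
PAGE_NUMBER_SIZE = 4      # assumed bytes per page-number entry

def blob_storage_level(blob_size: int) -> int:
    """Return the nesting depth needed to store a blob of `blob_size` bytes.

    Level 0: the blob fits on a data page.
    Level 1: blob pages, plus a vector of their page numbers on a data page.
    Level 2: the vector itself spans pages, tracked by a second vector.
    """
    if blob_size < PAGE_SIZE:
        return 0
    blob_pages = -(-blob_size // PAGE_SIZE)        # ceiling division
    vector_bytes = blob_pages * PAGE_NUMBER_SIZE
    if vector_bytes < PAGE_SIZE:
        return 1
    return 2

print(blob_storage_level(1_000))      # 0: fits on a data page
print(blob_storage_level(1_000_000))  # 1: blob pages + one page vector
print(blob_storage_level(8_000_000))  # 2: vector of vector pages
```

The takeaway matches Helen's point: the nesting is driven entirely by blob size and page size on the server side, not by the segment sizes the client happened to use.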
> My questions:
> 1. Do Firebird users need to know more about segment size (and if so,
> what ;) ), or is it indeed just a relic of the past?
> 4. Seeing Helen's post, can the code be changed to just output the
> entire BLOB in one go or have I misunderstood?

Relic of ancient times. The whole blob segmentation mechanism was a way
to get around the tiny memory sizes of machines in the early 1980's.
Good luck,
Ann