Subject: Re: [firebird-support] Re: Above 60 G (60*1000*1000*1000) records per table for 50 byte compressed record
Author: Ann Harrison
> Blobs can be stored in up to three levels.

Right, but you run into a number of problems backing up and restoring
blobs larger than 4 GB with gbak.

> Level 0 blobs contain the blob data. In that case, the bytes following
> the blob header are actual data, complete with the length words we
> talked about earlier.

Right.

> Level 1 blobs are arrays of page numbers of level
> 0 blob pages. Level 2 blobs are arrays of page numbers of level 1
> blobs.
>
> Level 2 blobs are (I think) always stored on data pages.

Right. This means that the amount of data that can be stored in
a blob depends on the page size: bigger pages, bigger blobs.
The largest theoretically possible blob, with 16K pages and
4-byte page numbers, is

(16K - data page header and line index) / 4
* (16K - blob page header) / 4
* (16K - blob page header)

Which is a big number.
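
To put a rough number on it, here is a back-of-envelope calculation in
Python. The header and line-index sizes are guesses for illustration,
not the exact values from the on-disk structure:

    # Rough estimate of the largest possible blob with 16K pages.
    # The overhead values below are assumptions, not real on-disk sizes.
    PAGE_SIZE = 16 * 1024
    DATA_PAGE_OVERHEAD = 32      # assumed data page header + line index entry
    BLOB_PAGE_OVERHEAD = 28      # assumed blob page header
    PAGE_NUMBER_SIZE = 4         # page numbers are 4 bytes

    level2_pointers = (PAGE_SIZE - DATA_PAGE_OVERHEAD) // PAGE_NUMBER_SIZE
    level1_pointers = (PAGE_SIZE - BLOB_PAGE_OVERHEAD) // PAGE_NUMBER_SIZE
    level0_data     = PAGE_SIZE - BLOB_PAGE_OVERHEAD

    max_blob_bytes = level2_pointers * level1_pointers * level0_data
    print(f"{max_blob_bytes:,} bytes")   # roughly 270 GB with these guesses

With smaller page sizes both pointer arrays shrink, so the limit drops
quickly - which is the "bigger pages, bigger blobs" point above.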

> Level 1 blobs
> are stored on data pages if they fit and if not, the engine creates a
> level 2 blob and puts that on a data page and puts the level 1 blobs on
> blob pages.

Right.
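
Put another way, the level the engine needs follows from how many
4-byte page numbers fit on a blob page. A sketch of that decision,
again with an assumed blob page header size:

    import math

    PAGE_SIZE = 16 * 1024
    BLOB_PAGE_OVERHEAD = 28     # assumed blob page header size
    PAGE_NUMBER_SIZE = 4        # page numbers are 4 bytes

    def blob_level(blob_length):
        """Which blob level is needed for blob_length bytes of data?"""
        data_per_page = PAGE_SIZE - BLOB_PAGE_OVERHEAD
        pointers_per_page = data_per_page // PAGE_NUMBER_SIZE

        if blob_length <= data_per_page:
            return 0    # the data fits on a single level 0 page
        level0_pages = math.ceil(blob_length / data_per_page)
        if level0_pages <= pointers_per_page:
            return 1    # one array of level 0 page numbers is enough
        return 2        # need level 1 arrays, indexed by a level 2 array

    print(blob_level(10_000), blob_level(1_000_000), blob_level(100_000_000))
    # -> 0 1 2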


Something else you might consider is large record storage. Generally,
the number of records you can store in a table decreases as the record
size increases, but once a record exceeds the page size, things get
better, sometimes markedly better. Large records are written tail
first: the tail fills overflow pages, which don't use record numbers,
and whatever is left over goes on a data page and gets a record number.
So you get almost as many 16494 byte (compressed) records as you do
10 byte records - the only difference being that a fragmented record
header is 4 bytes bigger than a normal record header.
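
For a concrete sense of why that works, here is a small sketch. The
page overhead and header sizes are assumptions for illustration, not
the real on-disk values:

    PAGE_SIZE = 16 * 1024
    PAGE_OVERHEAD = 40        # assumed page header + slot entry
    FRAGMENT_HEADER = 20      # assumed fragmented-record header

    def split_record(compressed_size):
        """Return (bytes of head fragment on a data page, overflow pages used)."""
        space = PAGE_SIZE - PAGE_OVERHEAD - FRAGMENT_HEADER
        overflow_pages = 0
        remaining = compressed_size
        # The tail is written first, filling overflow pages that carry
        # no record numbers.
        while remaining > space:
            remaining -= space
            overflow_pages += 1
        # Whatever is left goes on a data page and gets the record number.
        return remaining, overflow_pages

    print(split_record(16494))   # -> (170, 1): only a tiny head fragment
    print(split_record(10))      # -> (10, 0): a small record, same one slot

Either way the record consumes just one slot with a record number on a
data page, which is why the count of records per table barely changes.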

Good luck,

Ann