Subject | Re: [firebird-support] Re: Above 60 G (60*1000*1000*1000) records per table for 50 byte compressed record
---|---
Author | Ann Harrison
Post date | 2011-04-26T19:38:49Z
> Blobs can be stored in up to three levels.

Right, but you get into a number of problems backing up and restoring
blobs that are over 4GB with gbak.
> Level 0 blobs contain the blob data. In that case, the bytes following
> the blob header are actual data, complete with the length words we
> talked about earlier.

Right.
> Level 1 blobs are arrays of page numbers of level
> 0 blob pages. Level 2 blobs are arrays of page numbers of level 1
> blobs.
>
> Level 2 blobs are (I think) always stored on data pages.

Right. This means that the amount of data that can be stored in
a blob depends on the page size, bigger pages, bigger blobs.
The largest (theoretically) possible blob is
(16K - data page header and line index) / 4
* (16K - blob page header) / 4
* (16K - blob page header)
Which is a big number.
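
To put a rough number on that, here is a back-of-envelope sketch in Python.
The overhead figures are assumptions for illustration (the exact header
sizes vary by version), not the engine's actual constants:

```python
# Back-of-envelope estimate of the largest possible level 2 blob on
# 16K pages. The overhead figures below are assumptions for
# illustration, not exact values for any particular Firebird version.
PAGE_SIZE = 16 * 1024
DATA_PAGE_OVERHEAD = 64        # assumed: data page header + line index entry
BLOB_PAGE_OVERHEAD = 32        # assumed: blob page header

level1_pages = (PAGE_SIZE - DATA_PAGE_OVERHEAD) // 4   # level 1 page numbers in the level 2 array
level0_pages = (PAGE_SIZE - BLOB_PAGE_OVERHEAD) // 4   # level 0 page numbers per level 1 page
data_bytes   = PAGE_SIZE - BLOB_PAGE_OVERHEAD          # data bytes per level 0 page

max_blob = level1_pages * level0_pages * data_bytes
print(f"~{max_blob:,} bytes (about {max_blob / 2**30:.0f} GiB)")
```

With those assumed overheads it works out to a few hundred gigabytes
per blob.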
> Level 1 blobs
> are stored on data pages if they fit and if not, the engine creates a
> level 2 blob and puts that on a data page and puts the level 1 blobs on
> blob pages.

Right.
Something else you might consider is large record storage. Generally,
the number of records you can store in a table decreases as you
increase the size of the record, but once the record exceeds the
page size, things get better, sometimes markedly better.
Large records are written tail first, so they fill overflow pages
(which don't use record numbers) first, and whatever is left over goes
on a data page and gets a record number. So you get almost as many
16494 byte (compressed) records as you do 10 byte records -
the difference being that a fragmented record header is 4 bytes
bigger than a normal record header.
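
If it helps, here is a small sketch of the tail-first idea. The per-page
overhead is an assumed value and split_tail_first is a hypothetical helper,
not engine code:

```python
# Illustrative model of tail-first storage for a large compressed
# record: whole overflow pages are filled from the tail, and only the
# head fragment lands on a data page, where it gets the record number.
# PAGE_OVERHEAD is an assumed figure, not Firebird's exact constant.
PAGE_SIZE = 16 * 1024
PAGE_OVERHEAD = 32
USABLE = PAGE_SIZE - PAGE_OVERHEAD   # assumed payload per overflow page

def split_tail_first(compressed_len: int) -> tuple[list[int], int]:
    """Return (tail fragment sizes on overflow pages, head fragment size)."""
    tails = []
    remaining = compressed_len
    while remaining > USABLE:        # peel whole pages off the tail
        tails.append(USABLE)
        remaining -= USABLE
    return tails, remaining          # the head fragment goes on a data page

tails, head = split_tail_first(16_494)
print(f"{len(tails)} overflow page(s), {head}-byte head fragment on a data page")
```

Only that small head fragment competes for space on a numbered data page,
which is why the per-table record count barely drops for page-sized records.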
Good luck,
Ann