Subject Re: [firebird-support] Re: Maximum number of rows in Firebird 2.1 and up (is it 1099511627776 records ?)
Author Ann Harrison
On Sun, Apr 17, 2011 at 12:39 PM, lmatusz <matuszewski.lukasz@...> wrote:

> First of all the 32 TB limit for one table is true
> (32 TB = data + housing structures of data pages).
> It is stated in firebirdfaq (faq61): http://www.firebirdfaq.org/faq61
> (you are questioning this source of information ?)

Yes, as a matter of fact, I am. I don't introduce myself often, but you
should be aware that I was first involved with InterBase in 1984 and,
though I stopped working on Firebird five years ago, I have done a
fair bit of internals work.

The documentation is correct that you can't have more than 32TB
of records in any table. But that's a negative statement. It's not
possible for a person to jump more than three miles horizontally.
That doesn't mean that a person can jump 2.8 miles horizontally.
Nor can you store 32TB - 1 records in a table, unless each record
has a length of zero.

<dpg> is a data page. <ppg> is a pointer page. The record
number space is limited this way:

<max records on dpg> * <max dpg per ppg> * <max ppg> < 2**40
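For concreteness, the 40-bit record number space works out to the figure
in the subject line; a quick arithmetic check (Python used here purely
for illustration):

```python
# The record number is a 40-bit quantity, so the absolute ceiling
# on record numbers in one table is 2**40.
record_number_space = 2 ** 40
print(record_number_space)  # 1099511627776
```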

The middle factor, data pages per pointer page, is determined by the
page size, so the highest possible number of pointer pages per table
depends on how many records you assume you could possibly have
on a data page. In Firebird, the assumption is based on storing zero
length records. As I said before, a zero length record uses 17 bytes
of storage, of which 13 are the record header and 2 are the stored
length and 2 are the offset on the data page.
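As a sketch of that arithmetic (Python for illustration only; the
16-byte data-page header below is an assumption of mine, chosen because
it reproduces the 1K and 2K figures in the table that follows, not a
documented Firebird constant):

```python
# Per-record overhead from the post: 13-byte record header,
# 2-byte stored length, 2-byte offset on the data page.
RECORD_HEADER = 13
STORED_LENGTH = 2
PAGE_OFFSET = 2
OVERHEAD = RECORD_HEADER + STORED_LENGTH + PAGE_OFFSET  # 17 bytes

def max_records_per_page(page_size, data_len, page_header=16):
    # page_header is an assumed value; real pages carry more
    # bookkeeping, so treat this as an approximation.
    return (page_size - page_header) // (data_len + OVERHEAD)

print(OVERHEAD)                       # 17
print(max_records_per_page(1024, 0))  # 59 zero-length records on a 1K page
```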


Here's a table of page sizes, the number of zero-byte records that
fit on a page, and the maximum numbers of pointer pages and data
pages a table can have, given that the number of records per page,
times a fixed number of data pages per pointer page, times the
number of pointer pages, must not exceed 40 bits.


Page size   0-byte records   Max pointer    Data pages
            per page         pages          per table
 1024            59           80,544,055    18,766,764,727
 2048           119           19,521,763     9,253,315,677
 4096           239            4,806,289     4,594,812,603
 8192           480            1,192,462     2,289,526,907
16384           962              297,064     1,142,803,721

Yes, if you had zero length records, you could store 962 of them on a
16K page and get 1,099,511,627,520 records in a table. But if you
have 10 byte records, you only get 692,285,098,809 records because
you use only 37 of the 59 record numbers that are allocated for each
1K page. This is true regardless of the page size. With 10 byte
records on a 2K page, you use 75 of the 119 record numbers.
With a 4K page you use 151 of the 239 allocated, 8K pages of
10 byte records use 302 of the 480 numbers allocated. 16K pages
use 606 of 962.
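A hedged sketch of that ratio, reusing the 17-byte overhead described
earlier (the 16-byte page header is my assumption; it reproduces the
1K and 2K figures quoted above but may drift slightly for larger
pages):

```python
OVERHEAD = 17  # 13-byte header + 2-byte length + 2-byte offset

def records_per_page(page_size, data_len, page_header=16):
    # page_header is an assumed constant, chosen to match the
    # 1K and 2K numbers in the text.
    return (page_size - page_header) // (data_len + OVERHEAD)

# 10-byte records on a 1K page use 37 of the 59 record numbers
# allocated per page; the unused numbers are simply lost.
used = records_per_page(1024, 10)       # 37
allocated = records_per_page(1024, 0)   # 59
print(used, allocated)
# The fraction used (~0.63) is roughly the same at every page size,
# which is why record size, not page size, sets the table's capacity.
print(records_per_page(2048, 10), records_per_page(2048, 0))  # 75 119
```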

The net result is that the number of records that can be stored in
a table depends on the record size, not the page size. The larger
the records, the smaller the number that can be stored in a table.

Regardless of page size, you can store 692,285,098,809 10 byte
records in a table, or 278,980,562,207 50 byte records, or
159,758,099,725 100 byte records. Record size is the compressed
data length, including null bytes.

If Firebird assumed that records were actually 10 bytes of data on
top of the 17 overhead bytes, then you could store over 1 trillion
10 byte records in a table, but if you had shorter records you would
not be able to fill pages. Since the major time sink in a database
is reading and writing pages, having them full is a good thing.

Good luck,

Ann