Subject: Above 60 G (60*1000*1000*1000) records per table for a 50-byte compressed record
Author: lmatusz
If we have the following table:

CREATE TABLE simple
(
  PK BIGINT NOT NULL PRIMARY KEY,
  URL BLOB SUB_TYPE 1 CHARACTER SET ASCII,
  AUDYT_UT_UZY_ID BIGINT NOT NULL,
  AUDYT_DT TIMESTAMP NOT NULL,
  AUDYT_UM_UZY_ID BIGINT,
  AUDYT_DM TIMESTAMP
);

where PK is the primary key (8 bytes), URL is just a blob (8 bytes for the blob id stored in the record) whose contents are a URL address, and AUDYT_UT_UZY_ID, AUDYT_DT, AUDYT_UM_UZY_ID and AUDYT_DM are auditing fields. With a record size of 40 bytes we get above 60 G records per table (as long as each blob itself is smaller than db_page_size minus some overhead).
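Just to show where the size comes from, here is a quick back-of-the-envelope check of the declared column widths. This is only the raw column data, before the record header and Firebird's run-length compression, so treat it as an approximation rather than the exact on-disk size:

#include <cstdio>

int main()
{
    // Declared column widths of the "simple" table above:
    const unsigned pk       = 8;  // PK              BIGINT
    const unsigned url_id   = 8;  // URL             blob id stored in the record (not the blob data)
    const unsigned audyt_ut = 8;  // AUDYT_UT_UZY_ID BIGINT
    const unsigned audyt_dt = 8;  // AUDYT_DT        TIMESTAMP
    const unsigned audyt_um = 8;  // AUDYT_UM_UZY_ID BIGINT (nullable)
    const unsigned audyt_dm = 8;  // AUDYT_DM        TIMESTAMP (nullable)

    // 48 bytes of raw column data per record; run-length compression of the
    // stored record is what brings it down towards the 40-50 byte figures
    // mentioned above.
    std::printf("%u bytes\n", pk + url_id + audyt_ut + audyt_dt + audyt_um + audyt_dm);
    return 0;
}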

I have looked into the source of the Firebird V2 engine, and what worries me is that a temporary blob id (the contents of the URL field) is only 32 bits long, while a materialized blob id is fine and takes 40 bits.

My question is: how is a blob_id decomposed into a page number in the database file?

There is a decompose function which gives us line, slot and pp_sequence:

line = static_cast<SSHORT>(value % records_per_page);
const ULONG sequence = static_cast<ULONG>(value / records_per_page);
slot = sequence % data_pages_per_pointer_page;
pp_sequence = sequence / data_pages_per_pointer_page;
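
To make the arithmetic concrete, here is a small standalone snippet I put together to play with the decomposition. The values of records_per_page and data_pages_per_pointer_page are just made-up numbers here (in the engine they are derived from the database page size), so this only illustrates the arithmetic, not Firebird's actual constants:

#include <cstdint>
#include <cstdio>

int main()
{
    // Illustrative values only; the real ones depend on the page size.
    const std::uint64_t records_per_page            = 60;
    const std::uint64_t data_pages_per_pointer_page = 1000;

    const std::uint64_t value = 123456789;  // some 40-bit record/blob number

    // Same arithmetic as the quoted fragment:
    const std::uint64_t line        = value % records_per_page;
    const std::uint64_t sequence    = value / records_per_page;
    const std::uint64_t slot        = sequence % data_pages_per_pointer_page;
    const std::uint64_t pp_sequence = sequence / data_pages_per_pointer_page;

    // Recomposing gives the original number back, which is how I read
    // the relation between the three parts:
    const std::uint64_t recomposed =
        (pp_sequence * data_pages_per_pointer_page + slot) * records_per_page + line;

    std::printf("pp_sequence=%llu slot=%llu line=%llu recomposed=%llu\n",
                (unsigned long long) pp_sequence,
                (unsigned long long) slot,
                (unsigned long long) line,
                (unsigned long long) recomposed);
    return 0;
}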

What are slot and line (pp_sequence is, as I assume, a pointer page number)?

Is slot an index into pointer_page::ppg_page?
Is line an index into data_page::dpg_rpt?

Please help.

Łukasz Matuszewski

http://lmatusz.blogspot.com