Subject Re: Above 60 G (60*1000*1000*1000) records per table for 50 byte compressed record
Author lmatusz
--- In firebird-support@yahoogroups.com, Ann Harrison <aharrison@...> wrote:
>
> On Mon, Apr 25, 2011 at 3:59 AM, lmatusz <matuszewski.lukasz@...> wrote:
>
> Stuff about blob-ids is deleted because I don't know whether that's changed
> and don't remember how it worked.
>
> > There is a decompose function which gives us line, slot and pp_sequence:
> >
> > line = static_cast<SSHORT>(value % records_per_page);
> > const ULONG sequence = static_cast<ULONG>(value / records_per_page);
> > slot = sequence % data_pages_per_pointer_page;
> > pp_sequence = sequence / data_pages_per_pointer_page;
> >
> > What are slot and line (pp_sequence is, as I assume, the pointer page number)?
> >
> > Is slot a pointer_page::ppg_page index?
> > Is line a data_page::dpg_rpt index?
>
>
> Yes to both. The slot is the offset on the pointer page that holds the
> number of the data page and the line is the offset into the offset/length
> index on the data page that describes the record.
>
> Good luck,
>
>
> Ann
>
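
To make that decomposition concrete, here is a small standalone sketch that just mirrors the quoted formulas (it is not the Firebird source; records_per_page and data_pages_per_pointer_page are assumed example values, since the real ones depend on page size and compressed record length):

#include <cstdint>
#include <cstdio>

int main()
{
    // Assumed example values, not the real Firebird figures.
    const std::uint64_t records_per_page = 100;
    const std::uint64_t data_pages_per_pointer_page = 2000;

    // A record (dbkey) number to decompose.
    const std::uint64_t value = 123456789ULL;

    // line = offset into the offset/length index on the data page
    const std::uint64_t line = value % records_per_page;
    const std::uint64_t sequence = value / records_per_page;
    // slot = offset on the pointer page that holds the data page number
    const std::uint64_t slot = sequence % data_pages_per_pointer_page;
    // pp_sequence = which pointer page of the table
    const std::uint64_t pp_sequence = sequence / data_pages_per_pointer_page;

    std::printf("pp_sequence=%llu slot=%llu line=%llu\n",
                (unsigned long long) pp_sequence,
                (unsigned long long) slot,
                (unsigned long long) line);
    return 0;
}

For value = 123456789 with these example constants, that prints pp_sequence=617 slot=567 line=89.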

I figured it out: it seems that each blob takes a regular dbkey number plus space for the blob itself, so with blobs the figure of above 60 G records becomes far, far smaller. For smaller pieces of data in a field (a field can take up to 64 kB if it is the only one on a page), it is better to use VARCHAR or CHAR (for example with CHARACTER SET OCTETS), because that takes less space than a regular blob. For bigger fields we must use blobs.
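
As a rough back-of-envelope illustration of that trade-off (the 8-byte blob id, the 2-byte VARCHAR length prefix and the blob header overhead below are my own assumptions, not figures from this thread):

#include <cstdio>

int main()
{
    const int payload = 50;  // bytes of actual field data

    // Inline storage: assumed 2-byte length prefix plus the data,
    // all of it sitting inside the (compressed) record.
    const int varchar_in_record = 2 + payload;

    // Blob storage: assumed 8-byte blob id inside the record; the data
    // itself plus a blh header live on separate blob pages.
    const int blob_id_in_record = 8;
    const int assumed_blh_overhead = 40;  // rough guess only
    const int blob_outside_record = assumed_blh_overhead + payload;

    std::printf("inline VARCHAR/CHAR: %d bytes inside the record\n",
                varchar_in_record);
    std::printf("blob: %d bytes inside the record plus roughly %d bytes on blob pages\n",
                blob_id_in_record, blob_outside_record);
    return 0;
}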

A blh header is used for blob levels 0, 1 and 2 alike; only the content of blh::blh_page differs between level 0 (pure compressed data), level 1 and level 2.
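
A purely illustrative sketch of how I read the three levels (the struct and names below are stand-ins, not the real declarations from ods.h):

#include <cstdint>
#include <cstdio>
#include <initializer_list>

enum class BlobLevel : std::uint8_t { Level0 = 0, Level1 = 1, Level2 = 2 };

// Stand-in for the idea behind blh: one header shape for every level,
// only the meaning of the trailing page vector changes.
struct BlobHeaderSketch
{
    BlobLevel level;
    std::uint32_t page[1];  // stand-in for blh::blh_page
};

const char* describe(BlobLevel level)
{
    switch (level)
    {
    case BlobLevel::Level0:
        return "page[] holds the (compressed) blob data itself";
    case BlobLevel::Level1:
        return "page[] holds the numbers of blob data pages";
    case BlobLevel::Level2:
        return "page[] holds the numbers of blob pointer pages";
    }
    return "unknown level";
}

int main()
{
    for (BlobLevel lvl : { BlobLevel::Level0, BlobLevel::Level1, BlobLevel::Level2 })
        std::printf("level %d: %s\n", static_cast<int>(lvl), describe(lvl));
    return 0;
}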

Thanks again. Your support is outstanding.

Łukasz Matuszewski
http://lmatusz.blogspot.com