Subject | Re: [firebird-support] Database-Size and average fill of datapages
---|---
Author | Ann W. Harrison
Post date | 2004-12-07T19:07:45Z
At 06:10 AM 12/7/2004, Josef Gschwendtner wrote:
>Why is the average fill for data-pages not higher than 58%??? Can this
>be tuned?

By default, the system leaves space on each data page for back
versions and deleted record stubs. It leaves 16 bytes (I think),
the size of a fragmented record header, per primary record stored
on the page. So, if you have very small records -
one integer field for example - the system will leave a lot of
free space.
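To see roughly how that reservation plays out, here's a minimal
back-of-the-envelope sketch. The 4096-byte page size and the flat
16-byte reservation are assumptions for illustration, and page and
record header overhead is ignored:

```python
# Rough illustration of why small records leave data pages partly empty.
# Assumed figures, not measurements: a 4096-byte page and a flat 16-byte
# reservation per primary record for back versions / fragment headers.

PAGE_SIZE = 4096
RESERVE_PER_RECORD = 16

def fill_estimate(record_size: int) -> float:
    """Fraction of a page occupied by live record data."""
    records_per_page = PAGE_SIZE // (record_size + RESERVE_PER_RECORD)
    return records_per_page * record_size / PAGE_SIZE

# The smaller the record, the more the reserved space dominates.
for size in (8, 16, 64, 256):
    print(f"{size:4d}-byte records -> ~{fill_estimate(size):.0%} data fill")
```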
>After backup/restore the numbers for datapages did not change, but all
>index-pages have an average-fill of 89%-99% (1731 pages).

That's the difference between an incremental load of the index
and a fast load. In an incremental load, when a page of the
index fills, it splits, and half the previous contents go into
each of the two resulting pages. If you're loading in index order,
that means the index will end up about half full. There's an easy
code fix for
that, but we haven't done it yet. A fast load happens when
an index is created after all the data is stored and it fills
each index page completely before starting the next one.
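As a sanity check on those fill factors, here's a toy simulation of
the two strategies. The 100-key page capacity and the simplified
split mechanics are assumptions, not Firebird's actual B-tree code:

```python
# Toy model of index leaf-page fill.  Incremental in-order inserts split
# a full page in half, leaving every completed page ~50% full; a fast
# load packs each page completely before starting the next.

CAPACITY = 100  # keys per page -- an arbitrary assumption

def incremental_load(n_keys: int) -> list[int]:
    pages = [0]
    for _ in range(n_keys):
        if pages[-1] == CAPACITY:        # rightmost page full: split it
            pages[-1] = CAPACITY // 2    # half stays behind...
            pages.append(CAPACITY // 2)  # ...half moves to a new page
        pages[-1] += 1                   # new key lands on the new page
    return pages

def fast_load(n_keys: int) -> list[int]:
    full, rem = divmod(n_keys, CAPACITY)
    return [CAPACITY] * full + ([rem] if rem else [])

for name, pages in (("incremental", incremental_load(10_000)),
                    ("fast load", fast_load(10_000))):
    avg = sum(pages) / (len(pages) * CAPACITY)
    print(f"{name}: {len(pages)} pages, average fill ~{avg:.0%}")
```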
>It seems that double precision fields with a NULL-value do not need the
>space which is needed for non-NULL-values.

That's true of all fields with null values. Each record starts
with a record header, which is followed by one bit per field of
null indicators, rounded up to the nearest eight bits. The data
is laid out with all fields fully expanded; null-valued fields
are set to a default for the type - 0 in the case of numbers,
November 17, 1858 for dates, and spaces for character strings.
Before being written to a data page, the null bits and data are
compressed, using a simple run-length algorithm. That turns a
double precision 0 into one length byte containing the value 8
and a data byte containing the value 0. If you've got three
successive double precision fields with null values, you still
get two bytes, one containing 24 and one containing 0.
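To make that arithmetic concrete, here's a minimal run-length sketch
of the kind of scheme described. It shows the principle only; the
actual on-disk encoding differs in detail:

```python
# Minimal run-length sketch: each run of equal bytes becomes a length
# byte followed by a data byte.  Illustrates the principle described
# above, not Firebird's exact on-disk format.

def rle_runs(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out += bytes([j - i, data[i]])   # length byte, then data byte
        i = j
    return bytes(out)

# One NULL double precision field: eight zero bytes compress to 8, 0.
print(list(rle_runs(b"\x00" * 8)))    # -> [8, 0]

# Three successive NULL doubles: 24 zero bytes still compress to 24, 0.
print(list(rle_runs(b"\x00" * 24)))   # -> [24, 0]
```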
Regards,
Ann