Subject: Re: [firebird-support] database corruption without system restart
Author: Ann W. Harrison
Michal Rzewuski wrote:
> it seems that this is a source of my problem :( what is the exact
> limit? how to calculate it?

The calculation is complicated because it depends on the actual size
of compressed records in the table.

If the compressed record size is 10 bytes, you get 27,042,386,672
records. If the compressed record size is 50 bytes, 54,488,391,056
records. If the compressed record size is 100 bytes, 62,405,507,705
records.

> Primary pointer page: 143, Index root page: 144
> Data pages: 4310775, data page slots: 4435591, average fill: 8%
> Fill distribution:
> 0 - 19% = 2
> 20 - 39% = 2
> 40 - 59% = 0
> 60 - 79% = 0
> 80 - 99% = 4310771
> what are the interesting values for me "data pages" or "data page slot"?

Data pages are actual pages - the number of 8192-byte blocks devoted
to holding records for the table. Data page slots are the number
of entries in the pointer pages for the table. The two differ
because some number of pages have been released and their slots
have not yet been reused. Quite a lot, as it happens - more than 100
thousand. A backup/restore would fix that.
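From the gstat figures quoted above, the count of released-but-unreused slots is just the difference between the two numbers:

```python
# Difference between pointer-page entries and live data pages,
# using the gstat figures quoted above.
data_pages = 4310775
data_page_slots = 4435591

released_slots = data_page_slots - data_pages
print(released_slots)  # pages released whose slots were not reused
```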

> i had also the idea to turn on full data page filling but i'm afraid
> because i don't know the possible performance impact. what could i expect?

That will help by allowing the system to store more records on each
page. The problem is running out of record numbers. The record number
is a 4-byte quantity, so you'd think you'd get about 4 billion records.
However, the record number space isn't dense: the more records you
store on a single page, the denser the space becomes, and thus the
more records you can actually store.
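One way to picture why sparse pages waste record numbers (a toy model, not Firebird's actual numbering scheme): each page is allotted a fixed band of record numbers sized for the best case, so a page holding fewer records than that leaves the rest of its band permanently unusable.

```python
# Toy model of a sparse record-number space. The per-page capacity
# here is an assumed constant, not Firebird's real figure.
NUMBERS_PER_PAGE = 263   # record numbers reserved per page (best case)

def record_number(page_index, line):
    """A record's number is its page's band plus its line in the page."""
    return page_index * NUMBERS_PER_PAGE + line

# If large records mean only 60 actually fit on a page, the other
# 203 numbers in that page's band can never be used.
actual_per_page = 60
wasted_per_page = NUMBERS_PER_PAGE - actual_per_page
print(record_number(0, 0), record_number(1, 0))
print(wasted_per_page)
```

Under this model, filling pages more fully (or reclaiming the empty slots via backup/restore) directly recovers record-number space.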

The 100 thousand empty data page slots also waste record number space.
Interesting... I wonder why those slots aren't being reused... Hmmm...

As for performance without reserving space, that depends on whether you
normally update or delete records. The reserved space is used for back
versions - if you have them, you need it.