| Subject | Re: [firebird-support] Firebird 3.0 database page size |
| --- | --- |
| Author | Ann Harrison |
| Post date | 2011-04-28T20:26:45Z |
On Thu, Apr 28, 2011 at 4:19 PM, lmatusz <matuszewski.lukasz@...> wrote:
> Hi to all.
> Does anybody knows if database page size in Firebird 3.0 will be expanded to 32768 bytes ? This will allow to store greater number of records per table.

I doubt that it will, since you've already demonstrated that the number of records that can be stored depends on the size of the records, not the size of the page. Larger page sizes waste more record numbers.
> For about average record size little greater then 1kB it would be above 14.5 G records (while for 16384 it is 3,5 G records). This would give us upper limit for RLE compressed data (without overhead) in one table for about 15 TB (compression ratio for Medical Images for RLE is around 2.0 - 2.7 - so overall size will be above 30 TB for one table - and facts stated on firebirdfaq will be then met. I am not thinking of blobs in above assumptions).

Unh, I doubt that Firebird's RLE works like the one you mention.
> I am thinking of modifying source code of Firebird 2.5 to have it work with 32kB db page size, but i am worried if it will be simple operation and does such db files will be correctly recognized by in example SQL Manager For Interbase And Firebird ?

The developers have tried changing the page size and ran into a variety of bugs, which is why Firebird doesn't have a 32K page now. I think the problem is in managing memory blocks, which tend to have a signed int size, so they won't hold a 32K page plus the block overhead.
Do you really have that much data?
Good luck,
Ann