| Subject | Blob levels |
|---|---|
| Author | Dmitry Yemanov |
| Post date | 2005-10-13T14:21:47Z |
All,
Now we have three blob levels (0, 1, 2) which define how the blob is
physically stored. Level 0 means on-page data, level 1 contains pointers to
data pages, level 2 contains pointers to pointers to data pages. Hence the
maximum blob size is somewhere near:
(page_size / 4) * (page_size / 4) * page_size
which means ~64MB for 1K pages and ~4GB for 4K pages.
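For the curious, the arithmetic above can be checked with a few lines of Python. This is just an illustration of the formula, not engine code; the 4-byte page pointer size is assumed from the `page_size / 4` term:

```python
def max_blob_size(page_size: int) -> int:
    """Approximate level-2 blob capacity: pointers to pointers to data pages.

    Assumes 4-byte page numbers, so one pointer page holds
    page_size / 4 pointers; overhead per page is ignored.
    """
    pointers_per_page = page_size // 4
    return pointers_per_page * pointers_per_page * page_size

for ps in (1024, 2048, 4096, 8192):
    print(f"{ps:>5}-byte pages -> ~{max_blob_size(ps) // 2**20} MB")
# 1K pages give ~64 MB, 4K pages give ~4096 MB (~4 GB), matching the text.
```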
I don't know whether this restriction is documented somewhere or not. But
the real problem is that the code never checks whether a level 2 blob can be
overfilled. Just try to load a 65MB blob into a 1K page size database and
the engine bugchecks. I've just added an overflow check into v2.0 to report
a better error message instead. But I have a question: do we consider this
limitation okay? Or perhaps we could extend the current scheme to have level
3, 4, etc. blobs to address bigger data? Of course, this talk is not about
v2.0.
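To put a number on what extra levels would buy us: each additional pointer level multiplies the ceiling by another factor of page_size / 4. A hypothetical sketch (again assuming 4-byte page pointers and ignoring per-page overhead):

```python
def capacity(page_size: int, level: int) -> int:
    """Rough blob capacity with `level` layers of pointer pages
    above the data pages (level 2 is the current maximum)."""
    pointers_per_page = page_size // 4  # assumed 4-byte page numbers
    return pointers_per_page ** level * page_size

# With 1K pages, one extra level would lift the ~64 MB ceiling to ~16 GB:
print(capacity(1024, 2) // 2**20, "MB")  # level 2: 64 MB
print(capacity(1024, 3) // 2**30, "GB")  # level 3: 16 GB
```

So even for the smallest page size, a single extra level would push the limit well past anything practical at the time.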
Opinions?
Dmitry