Subject: blob_id is stored as part of the record data
Author: lmatusz
A blob_id is always a record number (it takes 64 bits)
and must be decomposed into pp_sequence, slot and line.
The record number stored in a blob_id always points to
a blh (blob header).
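The decomposition can be sketched roughly as follows. Note that MAX_RECORDS_PER_DP and MAX_SLOTS_PER_PP are illustrative placeholders of my own, not Firebird's real constants, which depend on the page size and on-disk structure:

```python
# Hedged sketch: splitting a record number into (pp_sequence, slot, line).
# The real divisors depend on the database page size and ODS version;
# the two constants below are assumptions for illustration only.

MAX_RECORDS_PER_DP = 100   # assumed records per data page
MAX_SLOTS_PER_PP = 200     # assumed data-page slots per pointer page

def decompose(record_number):
    """Return (pp_sequence, slot, line) for a record number."""
    line = record_number % MAX_RECORDS_PER_DP
    dp_index = record_number // MAX_RECORDS_PER_DP
    slot = dp_index % MAX_SLOTS_PER_PP
    pp_sequence = dp_index // MAX_SLOTS_PER_PP
    return pp_sequence, slot, line

def compose(pp_sequence, slot, line):
    """Inverse of decompose()."""
    return (pp_sequence * MAX_SLOTS_PER_PP + slot) * MAX_RECORDS_PER_DP + line
```

The two functions are inverses of each other, so a round trip through compose() and decompose() recovers the original triple.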
In case of level 0 blobs, the blh stores some header fields plus the compressed blob data itself.
In case of level 1 blobs, the data lives on database pages of type
blob_page. These blob_page pages contain the compressed blob data
(starting at address blob_page::blp_page), while blh::blh_page is an
array of SLONG numbers, each being the number of a database page of
type blob_page.
In case of level 2 blobs, blh::blh_page is again an array of SLONG
numbers of blob_page pages, but in these pages the array
blob_page::blp_page contains the database page numbers of the
level 1 blob pages.

This gives us a maximum blob size of:
1 Kb page size => 64 Mb
2 Kb page size => 512 Mb
4 Kb page size => 4 Gb
8 Kb page size => 32 Gb
16 Kb page size => big enough :-)
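These figures follow from simple arithmetic if we ignore page headers: a SLONG page number takes 4 bytes, so one page holds about page_size / 4 page numbers, and a level 2 blob therefore addresses roughly (page_size / 4)^2 data pages of page_size bytes each. A quick sketch:

```python
# Rough arithmetic behind the table above, ignoring page headers.
# A level 2 blob addresses (page_size / 4) ** 2 data pages, so the
# approximate limit is page_size ** 3 / 16.

def approx_max_blob_size(page_size):
    pointers_per_page = page_size // 4          # SLONG page number = 4 bytes
    return pointers_per_page ** 2 * page_size   # bytes of addressable data

for kb in (1, 2, 4, 8, 16):
    size = approx_max_blob_size(kb * 1024)
    print(f"{kb} Kb page size => {size // 2**20} Mb")
```

For a 16 Kb page this comes out around 256 Gb, which matches "big enough" above.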

The funny thing about blobs that do not fit into the remaining space of a data page is that blh::blh_page (an array of SLONG, max value 2147483647) contains the numbers of the database pages (blob level 1), so the theoretical limit on a database page number is 2147483647. No record number is wasted on the blob data itself. So for bigger blobs (those that do not fit into the remaining data page space) record numbers are saved, and thus we get over 60 G records for a record size of about 37 bytes (including the blob_id, which is 8 bytes long; RLE compressed, naturally).

Even better: if a text blob is bigger than 16 Kb (with a 16 Kb database page size) it will be stored as a level 1 (or even level 2) blob, and such a blob can still be compared with another blob field, or with plain text, at the SQL level - but then it is limited to the maximum char(n) length, which is 32767. Also the second argument of STARTING [WITH], LIKE and CONTAINING is limited to 32 Kb (32767 bytes if we put it in the SQL command as plain text).

So I will have the above 60 G limit for storing URIs if I put them in a blob field (for every blob value, pack as many URIs as fit until it would exceed 16 Kb; every URI is %-encoded and separated by CRLF for quick tokenization).
I can then use, for example, LIKE '%http://some.new.uri%' (because I know that for my problem the URI http://some.new.uri will never be longer than 32767 bytes) to check whether such a URI is already in the table.
Things are also better for me because by fetching one blob I fetch more than one URI. In my previous solution I had one URI per record (which was bad, because it limited me to about 6 G records per table). Now I can hold ten times more :-)
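The packing scheme described above could be sketched like this (pack_uris, unpack_uris and MAX_CHUNK are my own hypothetical names, not anything from Firebird):

```python
# Hedged sketch of the URI packing scheme: %-encode each URI, join them
# with CRLF, and start a new blob value once a chunk would exceed the
# 16 Kb database page size.
from urllib.parse import quote

MAX_CHUNK = 16 * 1024  # one 16 Kb database page

def pack_uris(uris, max_chunk=MAX_CHUNK):
    """Pack URIs into CRLF-separated chunks, each at most max_chunk bytes."""
    chunks, current = [], ""
    for uri in uris:
        encoded = quote(uri, safe=":/?#&=%")  # percent-encode, keep URI syntax
        candidate = encoded if not current else current + "\r\n" + encoded
        if len(candidate) > max_chunk and current:
            chunks.append(current)   # chunk full: store it, start a new one
            current = encoded
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def unpack_uris(chunk):
    """Quick tokenization: split one stored chunk back into URIs."""
    return chunk.split("\r\n")
```

Splitting on CRLF keeps the per-chunk scan trivial, and since every URI is percent-encoded, the separator can never occur inside a URI.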

Thanks again for your time, Ann Harrison. It is really great to talk with you.

Best regards and good luck
Ɓukasz Matuszewski