Subject | Re[2]: [Firebird-Architect] blobs causes table fragmentation.
---|---
Author | Dmitry Kuzmenko
Post date | 2004-10-04T20:52:47Z
Hello, Arno!
Monday, October 4, 2004, 4:49:50 PM, you wrote:
>> As I understand current behavior, blobs less than 256 bytes are
>> stored on the same data page, inside the record. If a blob is
>> bigger, it is stored on a blob page, not a data page.
AB> Is this limit hard-coded at 256?
I don't know the code, but maybe yes.
>> This makes the table more "fragmented" and seriously slows down
>> record retrieval if a 'select' does not select blob fields.
AB> Do you mean that blob data expands the record length, and because
AB> of that more data pages are used for the records?
yes, exactly.
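To illustrate (a rough sketch only - I don't know the real engine
code, as I said, and all names here are invented):

```c
/* Rough illustration only - invented names, not the Firebird source.
 * The point: every blob kept inside the record grows the record, so
 * fewer records fit on a data page, and a SELECT that does not touch
 * the blob fields still has to read all those extra pages. */

#define INLINE_BLOB_LIMIT 256  /* the 256-byte limit discussed above */

struct blob_desc {
    unsigned long length;
    const unsigned char *data;
};

/* 1 = keep the blob inside the record on the data page,
 * 0 = move it out to a separate blob page. */
static int blob_fits_inline(const struct blob_desc *b)
{
    return b->length < INLINE_BLOB_LIMIT;
}
```

So with a 256-byte limit, a row carrying two 200-byte inline blobs is
already much longer on the data page than its plain columns, and
every non-blob select pays for that.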
>> p.s. Also, I think (I don't know the code) this will be faster
>> than trying to fit the blob into the data page.
AB> IIRC blobs aren't compressed.
AB> Besides your solution, it would be useful to also compress blobs
AB> that fit into a data page.
I hadn't thought about compressed blobs, but I do know test results
for the SQZ_BLOCK parameter in Yaffil - it sets the record-size limit
below which records are not compressed. For example, tests showed
that with SQZ_BLOCK = 512 (instead of 256, as it is now) updates
speed up 2-3 times. But, of course, the database will grow faster.
And here is yet another tuning parameter for a table :-)
Looking with IBAnalyst at some production databases, I see an average
record length of ~235 bytes and an average version length of ~739
bytes. In such cases eliminating unneeded compression will give a
good performance improvement.
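To show the idea, here is a sketch of a run-length compressor with
such a no-compression limit (the control-byte convention and all
names are my guesses for illustration, not the Yaffil or Firebird
sources):

```c
#include <stddef.h>
#include <string.h>

/* Sketch only: a run-length compressor with an SQZ_BLOCK-style
 * "no-compression" limit. Assumed convention: a positive control
 * byte n means n literal bytes follow; a negative one means repeat
 * the next byte |n| times. 'out' must be big enough for the worst
 * case (about 1% larger than 'in'). */
static size_t sqz_compress(const unsigned char *in, size_t in_len,
                           unsigned char *out, size_t sqz_block)
{
    size_t i, o = 0;

    /* Records shorter than the limit are stored verbatim as literal
     * runs: no byte scanning at all, so updates of short records get
     * faster, at the price of a larger database. */
    if (in_len < sqz_block) {
        for (i = 0; i < in_len; i += 127) {
            size_t n = (in_len - i > 127) ? 127 : in_len - i;
            out[o++] = (unsigned char)n;           /* literal run */
            memcpy(out + o, in + i, n);
            o += n;
        }
        return o;
    }

    /* Longer records get real compression of repeated bytes. */
    for (i = 0; i < in_len; ) {
        size_t run = 1;
        while (i + run < in_len && in[i + run] == in[i] && run < 127)
            run++;
        if (run >= 3) {                            /* repeat run */
            out[o++] = (unsigned char)(0x100 - run);
            out[o++] = in[i];
            i += run;
        } else {                                   /* gather literals */
            size_t start = i, n = 0;
            while (i < in_len && n <= 125) {
                size_t r = 1;
                while (i + r < in_len && in[i + r] == in[i] && r < 127)
                    r++;
                if (r >= 3)
                    break;         /* leave the run for the next pass */
                i += r;
                n += r;
            }
            out[o++] = (unsigned char)n;           /* literal run */
            memcpy(out + o, in + start, n);
            o += n;
        }
    }
    return o;
}
```

With SQZ_BLOCK = 512, a ~235-byte record takes the verbatim branch
and skips the run scanning entirely - that is where a 2-3x update
speedup can come from, and also why the database grows faster.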
Of course, most databases (in the statistics I have) don't have
record sizes greater than 300 bytes, so record compression works;
but which is better - speed or database size - I can't say.
--
Dmitri Kouzmenko