Subject | Re: [firebird-support] Space consuming test
---|---
Author | Adomas Urbanavicius |
Post date | 2005-09-02T14:32:16Z |
Ivan Prenosil wrote:
>>I have made an interesting test:
>>Create DB
>>Create Table T1 ( FIELD1 VARCHAR(1600) )
>>Insert 2 million records of NULL (commit every 100,000 records).
>>Size became 141 MB.
>>That is, per record:
>>141,000 KB / 2,000,000 records ~ 0.0705 KB, or ~ 70.5 bytes. So FB is using 70
>>bytes of data per record for I don't know what.
>>(As far as I remember, 70 bytes would be ~ 35 chars.)
>>
>>
>
>Firebird's RLE algorithm will compress 1600 bytes into 28 bytes.
>Each row has some overhead (information about transaction id,
>format number, delete flag, null flags, alignment, etc.).
>Each data page has some overhead (checksum, relation id,
>array of pointers to records/record fragments, etc.).
>When inserting data, the page is not filled completely;
>some space is left so that subsequent updates will be faster
>(this reserving of space can be switched off).
>There are lots of other necessary structures - pointer pages
>so that data pages are well organized, transaction inventory
>pages, system tables, etc.
>
>And most of the space is in fact wasted by inserting
>two million rows of worthless data :-)
>
>Ivan
>
>
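For reference, the quoted test can be reproduced with a script along these
lines. This is only a sketch: it assumes Firebird 2.0 or later (EXECUTE BLOCK
does not exist in 1.5, where the inserts would be driven from a client-side
loop), it runs all inserts in one transaction instead of committing every
100,000 rows, and the file name and credentials are illustrative.

/* Sketch only: reproduce the quoted test in isql. */
CREATE DATABASE 'space_test.fdb' USER 'SYSDBA' PASSWORD 'masterkey';
CREATE TABLE T1 (FIELD1 VARCHAR(1600));
COMMIT;

SET TERM ^ ;
EXECUTE BLOCK AS
  DECLARE VARIABLE i INTEGER = 0;
BEGIN
  WHILE (i < 2000000) DO
  BEGIN
    INSERT INTO T1 (FIELD1) VALUES (NULL);
    i = i + 1;
  END
END^
SET TERM ; ^
COMMIT;

The database file size on disk after the final commit is the figure being
discussed above (141 MB for 2 million NULL rows in the original test).
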
The point of this test: let's say we have a big table TB_ORDERS with more than
30 million records, and we need to add a new column ADDITIONAL_COMMENTS
(VARCHAR(1600)), which by normalization theory should go into that table. BUT
this column will be used in only 0.1% of the rows, so 99.9% of it is wasted
space (28 bytes * 29,700,000 ~ 831 MB). (Of course, the other solution is a
one-to-one reference link.)
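
A minimal sketch of that one-to-one alternative, assuming TB_ORDERS has an
integer primary key ORDER_ID (the table and column names here are
illustrative, not from the original post): the comments go into a separate
table, so only the roughly 0.1% of orders that actually have a comment
(about 30,000 rows out of 30 million) take any space at all.

/* Sketch only: side table for the rare ADDITIONAL_COMMENTS value. */
CREATE TABLE TB_ORDER_COMMENTS (
  ORDER_ID            INTEGER NOT NULL,
  ADDITIONAL_COMMENTS VARCHAR(1600),
  CONSTRAINT PK_TB_ORDER_COMMENTS PRIMARY KEY (ORDER_ID),
  CONSTRAINT FK_TB_ORDER_COMMENTS_ORDERS
    FOREIGN KEY (ORDER_ID) REFERENCES TB_ORDERS (ORDER_ID)
);

/* Reading an order together with its (usually absent) comment: */
SELECT o.ORDER_ID, c.ADDITIONAL_COMMENTS
FROM TB_ORDERS o
LEFT JOIN TB_ORDER_COMMENTS c ON c.ORDER_ID = o.ORDER_ID;

The trade-off is one extra join whenever the comment is needed, in exchange
for not storing even a compressed empty VARCHAR(1600) in 30 million rows.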