Subject | Re: [firebird-support] Data size
---|---
Author | Helen Borrie |
Post date | 2005-09-20T05:10:23Z |
At 02:23 PM 20/09/2005 +1000, you wrote:
>I am trying to determine where the bulk of my data is stored in my
>database. We have recently made updates to software that is backed by a
>firebird database and suddenly the database is huge. I need to identify
>what tables/columns are storing what sort of physical data.
>Can I achieve this?

You should be maintaining one or a set of DDL scripts to document the
structure of your database. If you're not, you can extract one using the
isql -x switch, or the equivalent metadata extraction utility in your
favourite DB Admin tool.
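
For example, from the command line (the path, user and password here are
just placeholders - substitute your own):

  isql -x /data/mydb.fdb -user SYSDBA -password masterkey -output mydb_ddl.sql

That writes the DDL for the whole database into mydb_ddl.sql, which you can
keep as your baseline script.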

On an ad hoc basis, you can do SHOW TABLE tablename in isql to get a report
of all of the interesting things about tablename, including its structure.
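
For instance, inside isql (MYTABLE is just a stand-in for one of your own
table names):

  SQL> SHOW TABLE MYTABLE;

That lists the columns with their data types, plus any constraints defined
on the table.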

Or you can query the system tables (quite a complicated join is involved in
doing this) to get a list showing the fields and data types of your tables.
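
If you want to try that route, a query along these lines is a starting
point (a sketch only - refine the filtering and type decoding to taste):

  SELECT
    rf.RDB$RELATION_NAME AS table_name,
    rf.RDB$FIELD_NAME    AS field_name,
    f.RDB$FIELD_TYPE     AS field_type,  /* internal code, e.g. 261 = BLOB */
    f.RDB$FIELD_LENGTH   AS field_length
  FROM RDB$RELATION_FIELDS rf
  JOIN RDB$FIELDS f ON f.RDB$FIELD_NAME = rf.RDB$FIELD_SOURCE
  WHERE rf.RDB$RELATION_NAME NOT STARTING WITH 'RDB$'
  ORDER BY rf.RDB$RELATION_NAME, rf.RDB$FIELD_POSITION;

The type column comes back as an internal code (37 is VARCHAR, 261 is BLOB,
and so on), which is part of why the full join gets complicated.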

Or, any reasonable DB Admin tool will have an option to display the
structures of tables.

>I have a suspicion that it is to do with blob fields we have added for
>(small) images if this makes any difference.

If you suspect blobs might be involved - you recently added a new blob
field to a table? - it might make a significant difference initially to the
size of the database file. Remember that Firebird "tables" are not finite
things in terms of storage. There is no "file for Table A". All data are
stored on database pages, which the DB engine takes from the filesystem as
required. When application code permits efficient garbage collection, disk
space from obsolete record versions (and blobs) will get recycled and
new-page allocations will settle down in time.
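
If you want to see where the pages are actually going, another thing worth
trying is gstat against the database - something like this (the path is
just a placeholder):

  gstat -r /data/mydb.fdb > mydb_stats.txt

The -r report shows, table by table, the data pages in use, average record
and version lengths, and how many back versions are sitting around - a good
clue as to whether garbage collection is keeping up.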

If the data destined for a blob are consistently very small, the DB engine
will have tried (early in the life of the table) to store the blob data on
the table's data pages, alongside the rows. Otherwise, if the engine is
doing the "default" thing with blobs, it will be storing their data on
separate blob pages, apart from the row data.

The page size defined for the database can make a difference to database
file growth. For example, if you have a 16 KB page size and the engine is
storing blobs that average, say, 1 KB, then about 15 KB is wasted every
time you store or update a blob in that table, since a blob page can't hold
data from multiple blobs.
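
For what it's worth, the page size is fixed when the database is created -
for example (the path and size are just illustrative):

  CREATE DATABASE '/data/mydb.fdb' PAGE_SIZE 4096;

- and changing it afterwards means a backup and restore, specifying the new
page size on the restore. So if blob pages turn out to be the problem, that
is the lever you have.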

On the other hand, if this unusual growth is happening solely as the
consequence of recent changes in applications, you can bet your life that
the new application code is stultifying garbage collection from one cause
or another - usually to do with transaction management.

Please, would you set your email client to post list messages in plain text?

^ heLen