| Subject | Refactoring database |
| --- | --- |
| Author | Steve Boyd |
| Post date | 2010-01-04T16:56:56Z |
I am in the process of refactoring one of my Firebird databases. I have
several tables with rather large rows. That is, a lot of columns and a
lot of bytes of data per row. These tables also tend to have a large
number of indices to allow quick searching of the tables in question.
These tables are heavily updated and heavily searched. I have a couple
of questions:
1. How smart is Firebird about updating the indices when I change the
value of the row's data? I am assuming that if the values of the
columns comprising a particular index don't change and the updated
row still fits into the same page then the corresponding index
would not need to be updated. Is this correct?
2. Since the value of the indexed columns tends to change
infrequently compared to other columns in the table, would I be
better off creating two tables? One for the index data and one
for the rest. The index table could be updated via trigger only
when the value of any of the indexed columns changes. I am
thinking that the searches would be faster if the rows being
searched were physically smaller (more per page). I am also
thinking that the indices would tend to get less fragmented since
they wouldn't need to be updated as often when rows need to be
moved to new pages, etc.
3. I am currently storing comments and special instructions in
BLOBs. My original thinking was that this was a quick and dirty
way to provide unlimited capacity text fields. But the more I
work with Firebird, the more I am thinking that imposing a limit
to the size of the text fields and storing them as VARCHARs might
be the way to go. BLOB I/O seems to be rather slow. Also, BLOBs
cause problems when populating TDBGrids, since the BLOB data is
only fetched when needed and if the original transaction is
committed before the BLOB data is fetched by the grid you run the
risk of 'Invalid BLOB ID' errors. Is there a "best way" to store
variable-length textual data?
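To make question 2 concrete, the two-table split might be sketched along these lines (ORDERS, ORDER_SEARCH, and all column names are hypothetical, and the PSQL assumes a Firebird 2.x server for IS DISTINCT FROM):

```sql
-- Hypothetical wide table: the heavily-updated payload columns
-- plus copies of the few columns that are actually searched on.
CREATE TABLE ORDERS (
    ORDER_ID     INTEGER NOT NULL PRIMARY KEY,
    CUSTOMER_ID  INTEGER,
    ORDER_DATE   DATE,
    NOTES        VARCHAR(2000)
    /* ... many more payload columns ... */
);

-- Narrow companion table that carries the indices; its rows are
-- small, so more fit per page and the indices churn less.
CREATE TABLE ORDER_SEARCH (
    ORDER_ID     INTEGER NOT NULL PRIMARY KEY,
    CUSTOMER_ID  INTEGER,
    ORDER_DATE   DATE
);
CREATE INDEX IDX_ORDER_SEARCH_CUST ON ORDER_SEARCH (CUSTOMER_ID);
CREATE INDEX IDX_ORDER_SEARCH_DATE ON ORDER_SEARCH (ORDER_DATE);

-- Propagate a change only when an indexed value actually changed,
-- so ordinary payload updates never touch the search table.
SET TERM ^ ;
CREATE TRIGGER TRG_ORDERS_SYNC FOR ORDERS
AFTER UPDATE
AS
BEGIN
  IF (OLD.CUSTOMER_ID IS DISTINCT FROM NEW.CUSTOMER_ID
      OR OLD.ORDER_DATE IS DISTINCT FROM NEW.ORDER_DATE) THEN
    UPDATE ORDER_SEARCH
       SET CUSTOMER_ID = NEW.CUSTOMER_ID,
           ORDER_DATE  = NEW.ORDER_DATE
     WHERE ORDER_ID = NEW.ORDER_ID;
END^
SET TERM ; ^
```

Searches would then run against ORDER_SEARCH and join back to ORDERS by ORDER_ID only for the matching rows.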
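And the VARCHAR alternative from question 3 would amount to something like this (table names and sizes are made up for illustration; a Firebird VARCHAR is capped at 32,765 bytes, so the usable character count shrinks under multi-byte character sets):

```sql
-- Current approach: unlimited text, fetched in a separate
-- round trip as a BLOB.
CREATE TABLE NOTES_AS_BLOB (
    NOTE_ID   INTEGER NOT NULL PRIMARY KEY,
    COMMENTS  BLOB SUB_TYPE TEXT
);

-- Alternative: bounded text stored inline with the row, which a
-- TDBGrid can fetch together with the other columns, avoiding the
-- deferred-fetch 'Invalid BLOB ID' problem described above.
CREATE TABLE NOTES_AS_VARCHAR (
    NOTE_ID   INTEGER NOT NULL PRIMARY KEY,
    COMMENTS  VARCHAR(4000)
);
```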
Any insight that you can provide is appreciated.