Subject Re: [firebird-support] Refactoring database
Author Ann W. Harrison
Steve Boyd wrote:
>
> 1. How smart is Firebird about updating the indices when I change the
> value of the row's data? I am assuming that if the values of the
> columns comprising a particular index don't change and the updated
> row still fits into the same page then the corresponding index
> would not need to be updated. Is this correct?

If the value doesn't change, the index won't be updated. If the
updated record doesn't fit on the original page, it will be fragmented
and the first segment will stay on that page so index references stay
the same.
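
To make that concrete, here's a small sketch - the table, index, and
column names are invented, not taken from your schema:

    CREATE TABLE CUSTOMERS (
        CUST_ID   INTEGER NOT NULL PRIMARY KEY,
        CUST_CODE VARCHAR(20),
        NOTES     VARCHAR(500)
    );
    CREATE INDEX IDX_CUST_CODE ON CUSTOMERS (CUST_CODE);

    /* Only NOTES changes; CUST_ID and CUST_CODE keep their values,
       so neither the primary key index nor IDX_CUST_CODE is touched,
       even if the new record version fragments onto another page. */
    UPDATE CUSTOMERS
       SET NOTES = 'called back, left message'
     WHERE CUST_ID = 42;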

> 2. Since the value of the indexed columns tends to change
> infrequently compared to other columns in the table, would I be
> better off creating two tables? One for the index data and one
> for the rest. The index table could be updated via trigger only
> when the value of any of the indexed columns changes. I am
> thinking that the searches would be faster if the rows being
> searched were physically smaller (more per page). I am also
> thinking that the indices would tend to get less fragmented since
> they wouldn't need to be updated as often when rows need to be
> moved to new pages, etc.

Rows aren't moved to new pages, so that part of the concern doesn't
apply. I'm not convinced that the performance gain from reducing the
size of the indexed records would outweigh the cost of retrieving a
second record.
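
If you do want to try the split you describe, a rough sketch might
look like this - all names are invented, the insert and delete
triggers are omitted, and the IS DISTINCT FROM test needs Firebird
2.0 or later:

    /* Narrow table holding only the searched columns. */
    CREATE TABLE ORDERS_KEY (
        ORDER_ID   INTEGER NOT NULL PRIMARY KEY,
        CUST_CODE  VARCHAR(20),
        ORDER_DATE DATE
    );
    CREATE INDEX IDX_ORDERS_KEY ON ORDERS_KEY (CUST_CODE, ORDER_DATE);

    /* Wide table with the rest of the frequently updated columns. */
    CREATE TABLE ORDERS_DETAIL (
        ORDER_ID   INTEGER NOT NULL PRIMARY KEY,
        CUST_CODE  VARCHAR(20),
        ORDER_DATE DATE,
        NOTES      VARCHAR(2000)
    );

    SET TERM ^ ;
    CREATE TRIGGER ORDERS_DETAIL_AU FOR ORDERS_DETAIL
    AFTER UPDATE AS
    BEGIN
      /* Touch the narrow table only when a searched column actually
         changed, so its index is rewritten as rarely as possible. */
      IF (NEW.CUST_CODE IS DISTINCT FROM OLD.CUST_CODE
          OR NEW.ORDER_DATE IS DISTINCT FROM OLD.ORDER_DATE) THEN
        UPDATE ORDERS_KEY
           SET CUST_CODE = NEW.CUST_CODE,
               ORDER_DATE = NEW.ORDER_DATE
         WHERE ORDER_ID = NEW.ORDER_ID;
    END^
    SET TERM ; ^

Every search that needs the other columns then pays for a second
lookup in ORDERS_DETAIL, which is the cost I doubt the narrower rows
will recover.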

> 3. I am currently storing comments and special instructions in
> BLOBs. My original thinking was that this was a quick and dirty
> way to provide unlimited capacity text fields. But the more I
> work with Firebird, the more I am thinking that imposing a limit
> to the size of the text fields and storing them as VARCHARs might
> be the way to go. BLOB I/O seems to be rather slow. Also, BLOBs
> cause problems when populating TDBGrids, since the BLOB data is
> only fetched when needed, and if the original transaction is
> committed before the BLOB data is fetched by the grid, you run the
> risk of 'Invalid BLOB ID' errors. Is there a "best way" to store
> random length textual data?

The 'Invalid BLOB ID' error means that the row has been deleted or a
change to the contents of the blob has been committed. You might catch
that error and refresh the record. A problem with very large VARCHAR
fields is that the run-length compression algorithm uses a single byte
for the length of each run of repeated characters, so even an empty
32K VARCHAR takes ~500 bytes of compressed padding - if my calculation
is correct. (Which is rare.) Which means that if your comment or
instruction is 500 bytes, the compressed padding on the unused
remainder of the field brings it to almost twice that amount of space.
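
For the arithmetic, assuming the one-byte run length tops out around
127: ~32K of unused declared length compresses to about 258 two-byte
runs, roughly 500 bytes, which is where the figure above comes from.
As a sketch of the two declarations you're weighing (names invented):

    CREATE TABLE ORDER_NOTES (
        ORDER_ID     INTEGER NOT NULL PRIMARY KEY,

        /* Unlimited length, stored apart from the row and fetched by
           blob id - which is why a grid that reads it after the
           transaction that selected the row has committed can get
           'Invalid BLOB ID'. */
        COMMENTS     BLOB SUB_TYPE TEXT,

        /* Bounded text stored with the row; the unused part of the
           declared length still costs a few hundred bytes after
           run-length compression, as described above. */
        INSTRUCTIONS VARCHAR(8000)
    );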

Best regards,

Ann