Subject: Re: [firebird-support] Refactoring database
Author: Steve Boyd
Ann W. Harrison wrote:
> Steve Boyd wrote:
>
> If the value doesn't change, the index won't be updated.
This is good to know. I rather assumed this would be the case, but you
never know until you ask.
> If the
> updated record doesn't fit on the original page, it will be fragmented
> and the first segment will stay on that page so index references stay
> the same.
>
I guess this is one of those situations where you can't really win. You
end up increasing I/O to read the record but saving I/O when updating.
> Rows aren't moved. I'm not convinced that the gain in performance by
> reducing the size of the indexed records would outweigh the cost of
> retrieving a second record.
>
>
I wasn't convinced either, especially in light of the answer to #1
above, but it never hurts to get a second opinion.
> The 'Invalid BLOB ID' error means that the row has been deleted or a
> change to the contents of the blob has been committed. You might catch
> that error and refresh the record.
Actually, I knew that. For the purposes of displaying the grid, CASTing the
BLOB to VARCHAR gets around the problem as well. I don't need more than
the first hundred or so characters for the grid, so this was an easy,
low-cost solution to the problem. Still, it is moderately annoying.
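What I'm doing is roughly this (table and column names here are just
placeholders for the real ones); taking the first hundred characters before
the cast keeps a longer BLOB from raising a truncation error:

    SELECT o.order_no,
           o.status,
           CAST(SUBSTRING(o.instructions FROM 1 FOR 100) AS VARCHAR(100))
               AS instructions_preview
    FROM   orders o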
> A problem with very large VARCHAR
> fields is that the run-length compression algorithm uses a byte for the
> length of the repeated character, so an empty 32K VARCHAR takes ~500
> bytes - if my calculation is correct. (Which is rare.) Which means
> that if your comment or instruction is 500 bytes, it takes almost twice
> that amount of space.
>
>
>
I did not know that. I knew VARCHARs were compressed but didn't give
much thought to how it was done.
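If I'm reading that right, the arithmetic works out roughly like this
(assuming each compressed run is stored as a count byte plus the repeated
character, and a run covers at most about 128 characters): 32000 / 128 is
250 runs, at 2 bytes per run, which is about 500 bytes for a VARCHAR(32000)
that is entirely padding. So the ~500 byte figure sounds plausible.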

Let me ask something else then. I end up having to reorg this database
every 6 months or so. Performance gradually goes downhill to the point
where no one can get any work done. The problem seems to be concentrated
in one particular table that has large rows, lots of indices, and highly
volatile data. And yes, I have gone over my code with a fine-tooth comb
and monitored the statistics. I am not leaving updatable transactions open
for long periods of time. Because of the realities of using TDBGrid, there
are some read-only, read-committed transactions that remain open for
upwards of a couple of minutes at a time, but my understanding is that
these transactions shouldn't cause problems.
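(Just so we're talking about the same thing, by read-only, read-committed
I mean the equivalent of

    SET TRANSACTION READ ONLY ISOLATION LEVEL READ COMMITTED;

which, as I understand it, shouldn't hold up garbage collection the way a
long-running snapshot transaction would.)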

Are there things that I can do, in terms of the way the data is
structured, to reduce the fragmentation and/or back versions? There are a
handful of columns, having to do with status and status dates, that are
the cause of most of the updates to the table. Would moving these volatile
columns to their own table make any difference?
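To make the idea concrete, what I have in mind is something along these
lines (hypothetical names; a narrow 1:1 side table keyed on the main
table's primary key):

    CREATE TABLE order_status (
        order_id    INTEGER NOT NULL PRIMARY KEY,
        status      CHAR(1),
        status_date TIMESTAMP,
        CONSTRAINT fk_order_status FOREIGN KEY (order_id)
            REFERENCES orders (order_id)
    );

    /* the frequent status changes would then update only this narrow row */
    UPDATE order_status
    SET    status = 'S', status_date = CURRENT_TIMESTAMP
    WHERE  order_id = :id;

The hope would be that the back versions and fragmentation pile up on the
small rows rather than on the big, heavily indexed ones.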


