| Subject | Re: [firebird-support] questions on firebird, mainly obscure |
| --- | --- |
| Author | Ann W. Harrison |
| Post date | 2007-11-09T18:27:54Z |
Carrell Alex wrote:
>> The records are run length compressed.
> This means little to me:
> Average record length: 270.13, total records: 301679

As a result of compression, the average record is about 270 bytes. The table has 301,679 primary records. That should be about what you get with select count(*)...

> Average version length: 63.68, total versions: 69, max versions: 3

This is the interesting part. You have only 69 back record versions. The longest back chain is three records. That means that a sweep isn't going to give you back much at all.
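A quick back-of-the-envelope check of why a sweep would reclaim so little, using the version figures from the gstat output quoted above (a rough upper bound that ignores per-version page overhead):

```python
# Back-version figures from the quoted gstat output.
total_versions = 69
avg_version_length = 63.68  # bytes, run-length compressed

# Rough upper bound on the space a sweep could reclaim:
# all back versions together are only about one 4096-byte page.
reclaimable = total_versions * avg_version_length
print(f"~{reclaimable:.0f} bytes ({reclaimable / 4096:.1f} pages of 4096 bytes)")
```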
> Index PART_BYPARTNO (13)
> Depth: 3, leaf buckets: 643, nodes: 301462
> Average data length: 1.00, total dup: 3, max dup: 1
> Fill distribution:
> 0 - 19% = 0
> 20 - 39% = 0
> 40 - 59% = 221
> 60 - 79% = 4
> 80 - 99% = 418

The fill levels look OK on data, but seem a bit bad on indexes. The index PART_BYPARTNO has a very small average data length, which should allow high fill levels (>85%), but nearly a third of the pages are between 40 and 60 percent filled. Firebird recombines index pages when the fill level of adjacent pages goes below 33% (IIRC), which is probably why there are no pages less than 40% full.

> Index PURCHTRAN_BYPARTNO (2)
> Depth: 3, leaf buckets: 2092, nodes: 870334
> Average data length: 0.00, total dup: 778033, max dup: 18128
> Fill distribution:
> 0 - 19% = 0
> 20 - 39% = 1
> 40 - 59% = 1543
> 60 - 79% = 51
> 80 - 99% = 497

The second index is even worse... three quarters of the pages are less than 60% full. You may find that deactivating and reactivating those indexes will improve your performance.

> With page size of 4096
> - using a page size of 8192, there are better fill ratios. I changed
> it down to save ram and use what was saved to lower the lock hash chains;
> allround performance improved. However, for indexes the depth went up by 1
> (not above 3). I am tempted to go back to 8192, but the server will have to
> swap to cope (big slow down). We are looking at more ram. I am asking these
> questions as in months time I will overhaul the servers.

More ram is a cheap way to better performance, and probably the best answer. When you increase the page size, you can lower the cache size and still cache the same amount of data and indexes. That might be a short term fix.
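The fractions behind "nearly a third" and "three quarters", and the page-size versus cache trade-off, can be checked directly from the quoted numbers (a sketch; the buffer count at the end is illustrative, not from the post):

```python
# Fill distribution buckets from gstat: [0-19%, 20-39%, 40-59%, 60-79%, 80-99%]
part_bypartno = [0, 0, 221, 4, 418]         # 643 leaf buckets
purchtran_bypartno = [0, 1, 1543, 51, 497]  # 2092 leaf buckets

# Fraction of index pages less than 60% full.
frac_part = sum(part_bypartno[:3]) / sum(part_bypartno)
frac_purchtran = sum(purchtran_bypartno[:3]) / sum(purchtran_bypartno)
print(f"PART_BYPARTNO: {frac_part:.0%} below 60% full")            # ~34%
print(f"PURCHTRAN_BYPARTNO: {frac_purchtran:.0%} below 60% full")  # ~74%

# Ann's cache point: doubling the page size while halving the buffer
# count caches the same number of bytes.
buffers_4k = 10000            # illustrative buffer count, not from the post
cache_4k = 4096 * buffers_4k
cache_8k = 8192 * (buffers_4k // 2)
assert cache_4k == cache_8k   # same bytes cached either way
```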
Good luck,
Ann