| Subject | Re: [firebird-support] Performance with heavily updated table |
|---|---|
| Author | Ann Harrison |
| Post date | 2013-08-09T19:33:17Z |
On Aug 9, 2013, at 9:20 AM, Tim Ward <tdw@...> wrote:
> The gap between OIT and next transaction was around 2,000,000. I did a
> manual sweep and the gap changed to zero, but the performance of the
> query did not change. (I'd said in an earlier message that I'd been told
> that running a sweep made no difference, and I have now seen this for
> myself.)

Hmmm... Very odd indeed. Clearly the sweep succeeded, since the gap went to zero, and that should mean that all the overflow pages have been released from the table. Do you create any new records while you're doing the updates? I wrote rather glibly about "overflow pages", but those pages are actually in the table and can be used to store new records. That might reduce the record density - if new records were created during the update phase. From your description, I'd guess that's not what's happening.

I really don't know why the primary data pages that were created during the initial storage of the records would be read more than once.
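For readers who want to reproduce the check discussed above, here is a minimal sketch using the standard gstat and gfix command-line tools (the database path and SYSDBA credentials are placeholders, not part of the original post):

```
# Print header-page statistics; compare "Oldest transaction" (OIT)
# with "Next transaction" to see the gap discussed above.
gstat -h -user SYSDBA -password masterkey /data/test.fdb

# Run a manual sweep, which removes unneeded back versions
# and lets the OIT advance.
gfix -sweep -user SYSDBA -password masterkey /data/test.fdb
```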
> I installed Firebird 2.5 to see whether its trace feature would give any
> more information. But, run against exactly the same database (a copy of
> the file), it behaved as one would have expected, not as 2.1.5.
>
> Here is the STATS output from doing the COUNT(*) on 2.5:
>
> Reads = 1489
> Writes = 0
> Fetches = 263870
>
> which is reasonable as gstat shows the table as occupying a bit under
> 1,500 pages, suggesting that a table scan reads each page once, which is
> exactly what one would expect.

Interesting that the writes are zero - something has done your garbage collection for you.
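For anyone wanting to gather the same counters, a sketch of the measurement in isql (the table name heavily_updated_table and the connection string are invented for illustration; SET STATS is a standard isql command that prints reads, writes, and fetches after each statement):

```
# Run the scan under isql with per-statement statistics enabled.
isql -user SYSDBA -password masterkey localhost:/data/test.fdb <<'EOF'
SET STATS ON;
SELECT COUNT(*) FROM heavily_updated_table;
EOF
```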
> Here is the STATS output from the same query on the same database using
> 2.1.5:
>
> Reads = 81652
> Writes = 0
> Fetches = 393459

Reads went up by a factor of roughly 55, while fetches (requests for a page, whether or not it is already in cache) went up by less than 50%. Maybe 2.5 uses a larger cache?
> (1) What is the extra fetch per record?

Back versions.

> (2) Why in many but not all cases (80,000 out of 130,000) does it appear
> to cause a cache miss and page read?

Luck and cache management.
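To see those back versions directly, a hedged sketch: gstat's record analysis reports per-table record and version statistics (the table name and database path below are placeholders):

```
# Per-table record analysis; the version-count figures show how many
# back versions the table is carrying after the heavy updates.
gstat -r -t HEAVILY_UPDATED_TABLE -user SYSDBA -password masterkey /data/test.fdb
```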
> and, of course:
>
> (3) How do I stop it happening?

If 2.5 has a larger default cache size and you're using the default cache size, increase the cache in 2.1.5.
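That suggestion can be applied in at least two standard ways; a sketch with placeholder values (the 2048-page figure and the database path are illustrative, not recommendations):

```
# Option 1: store a default page-buffer count in the database header.
gfix -buffers 2048 -user SYSDBA -password masterkey /data/test.fdb

# Option 2: raise DefaultDbCachePages in firebird.conf instead, which
# applies to every database without a per-database override.

# Verify via the "Page buffers" line in the header statistics.
gstat -h -user SYSDBA -password masterkey /data/test.fdb
```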
Good luck,
Ann
who really should read all her mail before responding.