Subject: Re: [firebird-support] Performance with heavily updated table
Author: Tim Ward
Post date: 2013-08-09T12:20:06Z
On 08/08/2013 11:49, Svein Erling Tysvær wrote:
> > > Can you show us the statistics when the database is slow?
> > >
> > > Exactly what numbers would you like? - we can re-run and generate
> > > anything that would be helpful.
> >
> > I'm only wondering if the gap between oldest (interesting/active)
> > transaction and next transaction is noticeable. Though this is
> > starting to sound like this could be something else, and that means
> > that I've got no clue.
> >
The gap between OIT and next transaction was around 2,000,000. I did a
manual sweep and the gap changed to zero, but the performance of the
query did not change. (I'd said in an earlier message that I'd been told
that running a sweep made no difference, and I have now seen this for
myself.)
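(For reference, the manual sweep is the usual gfix run; the path and credentials below are placeholders rather than our actual setup:

  # run the sweep by hand instead of waiting for the automatic sweep interval
  gfix -sweep -user SYSDBA -password masterkey /path/to/database.fdb
)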
I installed Firebird 2.5 to see whether its trace feature would give any
more information. But, run against exactly the same database (a copy of
the file), it behaved as one would have expected, not as 2.1.5.
Here is the STATS output from doing the COUNT(*) on 2.5:
Reads = 1489
Writes = 0
Fetches = 263870
which is reasonable: gstat shows the table occupying a bit under
1,500 pages, so a table scan that reads each page once is exactly what
one would expect.
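(For anyone wanting to reproduce these numbers: the STATS figures above
look like the per-statement statistics isql prints once SET STATS is
switched on, and the page count comes from gstat. A minimal session of
that kind - table name, path and credentials are placeholders - would be:

  # per-statement Reads/Writes/Fetches from isql
  isql /path/to/database.fdb -user SYSDBA -password masterkey
  SQL> SET STATS ON;
  SQL> SELECT COUNT(*) FROM big_table;

  # per-table record/page statistics from gstat
  gstat -r -t BIG_TABLE /path/to/database.fdb
)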
Here is the STATS output from the same query on the same database using
2.1.5:
Reads = 81652
Writes = 0
Fetches = 393459
and, with 81,000 reads instead of 1,500, it unsurprisingly takes *much*
longer.
If we back up and restore the database, the numbers for 2.1.5 revert to
around 1,500. If we then run updates on this table for a few hours, the
numbers gradually get worse again.
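(The backup/restore is the usual gbak cycle, something along these lines
- paths and credentials are placeholders, and the restore goes into a
fresh file which then replaces the old one:

  # back up to an .fbk, then restore it into a new database file
  gbak -b -user SYSDBA -password masterkey /path/to/database.fdb /path/to/database.fbk
  gbak -c -user SYSDBA -password masterkey /path/to/database.fbk /path/to/database.new.fdb
)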
We don't want to stop all our processes dead in order to do a backup and
restore every couple of hours. What other solutions are there?
There are, by the way, just under 130,000 records in the table. I note
that using 2.5 there are just over 260,000 fetches, which is a bit more
than two per record, although I haven't worked out what the second is.
Using 2.1.5 on the database that hasn't recently been restored there are
just over 390,000 fetches, which is a bit more than *three* per record.
So one might ask:
(1) What is the extra fetch per record?
(2) Why in many but not all cases (80,000 out of 130,000) does it appear
to cause a cache miss and page read?
and, of course:
(3) How do I stop it happening?
--
Tim Ward