Subject | Re: [firebird-support] Performance with heavily updated table
---|---
Author | Tim Ward
Post date | 2013-08-08T08:28:08Z
On 08/08/2013 08:20, Svein Erling Tysvær wrote:
> I've got a few questions, Tim:
>
> Is it a sudden or gradual slowdown?

It gets worse over time: it's 100 times worse after the system has been
running for around 12 hours. We then stop all the application processes
and measure that one query.
> Can you show us the statistics when the database is slow?

Exactly what numbers would you like? We can re-run and generate
anything that would be helpful.
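
If it's any use as a starting point, we could log the transaction
markers over the 12-hour run - a minimal sketch, assuming Firebird 2.1
or later where the monitoring tables exist (gstat -h shows the same
counters):

    -- transaction markers; a widening gap between oldest active and
    -- next transaction would point at stuck garbage collection
    SELECT MON$OLDEST_TRANSACTION, MON$OLDEST_ACTIVE,
           MON$OLDEST_SNAPSHOT, MON$NEXT_TRANSACTION
    FROM MON$DATABASE;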
> The most common reasons for gradual slowdowns are one or more
> long-running transactions - and it doesn't have to be a transaction
> that does updates; the only transactions that you can safely allow
> to run for a long time are those that are both read-only AND read
> committed. Sudden slowdowns are more likely to be due to a sweep or
> someone running CPU-intensive queries (e.g. something like SELECT
> COUNT(*) FROM MYTABLE CROSS JOIN MYTABLE, which would return
> 10 000 000 000 if MYTABLE contained 100 000 records, or many users
> simultaneously running SELECT COUNT(*) FROM MYTABLE).

There was nothing else at all going on when we did the tests; this is
a performance-testing box, not a live system. I understand that parallel
activity will cause any database operation to take more wall-clock time,
but what I don't understand, and what I'm asking, is what would cause it
to apparently read each page of the table 100 times when once would
logically appear to be sufficient.
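
(To check I've understood the rule you state: the only sort of
transaction that's safe to leave open for hours would be one started
along these lines - a sketch in plain Firebird SQL, the exact
incantation depends on the driver:

    -- read-only + read committed: doesn't hold back garbage collection
    SET TRANSACTION READ ONLY READ COMMITTED;

Anything else left running pins the oldest-active marker and stops old
record versions from being cleaned up.)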
> Another thing to consider is whether you actually need COUNT(*).

That's not, of course, the actual application code we're trying to
diagnose. It's just a nice easy simplification that demonstrates a table
scan apparently reading each page of the table 100 times when once would
logically appear to be sufficient. We have application logic that
currently does table scans on that table in rather more complex queries:
at 0.1s we can live with that until it comes high enough up the list for
us to worry about; at 10s we can't.
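
(On the COUNT(*) point: if all we needed were an existence check, a
sketch along the lines of

    -- stop at the first record instead of counting them all
    SELECT FIRST 1 1 FROM MYTABLE;

would avoid visiting every row, but the full count is deliberate here:
it's the simplest query that reproduces the scan.)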
--
Tim Ward