Subject Re: Fbserver eating CPU...not taking advantage of memory
Author Adam
> As for the machine, the machines that work well have hyperthreaded CPUs
> (though we have tested in many single-CPU environments without the serious
> problem we see on this one machine). The machine with the problem has a
> particularly slow CPU (1.2 GHz something-or-other – probably not a
> Pentium).
>
>
> Different data making the optimiser choose a different plan for the
> same query?
>
> Same application, could be looking at more data but we do not think
> so.
>
>
> My question was intended to see if there were some known issues that would
> prevent FB from using more memory, which might therefore put more pressure
> on the CPU (I know this can happen with SQL Server and Oracle).
>


Why do you think it is possible to replace CPU usage with RAM?
The CPU is what processes the data; more RAM just gives it more options
in how it processes it. While it is true that some operations can be
made faster at the expense of extra memory, not all operations can be.

The usual cause of this sort of behaviour is a rogue query. Queries
often perform well in test environments because, to speed up
development, you tend to test against smaller datasets. A full table
scan of a 100-record table is not going to be noticeably slow; with
10,000 records, let me assure you it will be noticed.
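To make that concrete, here is a made-up illustration (table, column and
index names are all hypothetical) of how the same query shows up in a
Firebird plan with and without a usable index:

```sql
/* No index on LAST_NAME: the optimiser reports a full table scan. */
SELECT * FROM CUSTOMERS WHERE LAST_NAME = 'Smith';
-- PLAN (CUSTOMERS NATURAL)                         <- full table scan

CREATE INDEX IDX_CUSTOMERS_LASTNAME ON CUSTOMERS (LAST_NAME);

/* Same query after the index exists: indexed retrieval instead. */
SELECT * FROM CUSTOMERS WHERE LAST_NAME = 'Smith';
-- PLAN (CUSTOMERS INDEX (IDX_CUSTOMERS_LASTNAME))  <- indexed retrieval
```

On 100 rows the two plans feel identical; on 10,000 rows the NATURAL
plan is the one that eats your CPU.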

> Inactive indexes?
>
> I would say "no" because all machines use the same app. But the app
> drops and recreates indexes and it is remotely possible that the indexes
> did not get rebuilt.

Unless you are pumping data in bulk, and you have specific performance
targets for the insert operation on that table, I would avoid stuffing
around with the indices at all.

There are a couple of things you may not have considered.

1. Triggers!

What queries are they running, and when are they being run? There is
often a lot of room for optimisation here. In particular, be aware of
triggers that recursively impact the same table.
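A sketch of the recursive-trigger trap, with entirely hypothetical
table and column names:

```sql
SET TERM ^ ;

CREATE TRIGGER TRG_ORDERS_AU FOR ORDERS
ACTIVE AFTER UPDATE POSITION 0
AS
BEGIN
  /* This UPDATE hits the same table the trigger is defined on,
     so it re-fires TRG_ORDERS_AU for every matching row. The
     trigger keeps invoking itself until Firebird's recursion
     limit stops it with an error, burning CPU the whole way. */
  UPDATE ORDERS
     SET LAST_CHANGED = CURRENT_TIMESTAMP
   WHERE CUSTOMER_ID = NEW.CUSTOMER_ID;
END^

SET TERM ; ^
```

The usual fix is to assign to NEW.LAST_CHANGED in a BEFORE UPDATE
trigger instead of issuing a fresh UPDATE against the same table.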

2. Language.

It concerns me when you say that you are going through the code to see
exactly what the queries are. It may be a throw-away line meaning you
will recheck everything, but it may also mean that they were never
checked for plans in the first place.

I would suggest using a tool such as IBPlanalyzer. For every query you
run, check the plan. Make sure that table scans do not happen, or, if
they do, that you can guarantee the table will only ever hold a few
(<100) records. Firebird 1.5 often performs better when some WHERE
conditions are moved into the JOIN condition, but this is related to
the plan it chooses, so it will become evident once you check exactly
what is being run.
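For example (hypothetical tables again), the rewrite amounts to moving
a filter from the WHERE clause into the JOIN clause:

```sql
/* Original form: the filter sits in the WHERE clause. */
SELECT o.ORDER_ID, c.NAME
  FROM ORDERS o
  JOIN CUSTOMERS c ON c.CUSTOMER_ID = o.CUSTOMER_ID
 WHERE c.REGION = 'NORTH';

/* Rewritten: the same filter folded into the JOIN condition,
   which in Firebird 1.5 can steer the optimiser to a better plan. */
SELECT o.ORDER_ID, c.NAME
  FROM ORDERS o
  JOIN CUSTOMERS c ON c.CUSTOMER_ID = o.CUSTOMER_ID
                  AND c.REGION = 'NORTH';
```

The two statements return the same rows for an inner join; the only
question is which plan the optimiser picks, which is why checking the
plan for each variant is the point of the exercise.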

I think you are probably barking up the wrong tree with the RAM theory.
I bet it is a query-related performance issue.

But finally, some encouragement.

We had a problem where a report would take 10 minutes to open on site.
After bringing the database back for analysis, I noticed that the
selected plan involved a table scan: the query was poorly written and
used left joins that could really have been inner joins. After spending
10 minutes on the query with the optimiser, the report now opens on
that site in 2 seconds.
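The class of fix involved looks roughly like this (table and column
names invented for the sketch, not from the actual report):

```sql
/* A LEFT JOIN forces the optimiser to keep unmatched left-hand rows,
   which also pins down the join order it may consider. */
SELECT i.ITEM_ID, p.DESCRIPTION
  FROM INVOICE_ITEMS i
  LEFT JOIN PRODUCTS p ON p.PRODUCT_ID = i.PRODUCT_ID
 WHERE p.DESCRIPTION IS NOT NULL;

/* If every item is guaranteed to have a product, an INNER JOIN says
   so explicitly and leaves the optimiser free to pick a better plan. */
SELECT i.ITEM_ID, p.DESCRIPTION
  FROM INVOICE_ITEMS i
  JOIN PRODUCTS p ON p.PRODUCT_ID = i.PRODUCT_ID;
```

The WHERE filter in the first query was quietly turning the outer join
into an inner join anyway, so the rewrite changes nothing but the plan.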

Adam