Subject | Re: Server Resource problem
---|---
Author | Adam
Post date | 2006-02-15T11:06:54Z
> Looking at the server console indicates that Firebird is the culprit.
> This seems to be happening on a daily basis and started about a
> month ago. We have also seen other instances of this happening on
> other servers, but not on such a regular basis. When the problem
> occurs, the users notice a degradation in performance of both the
> Firebird application and also other applications running on the
> server. Any suggestions?

Firebird is just a process that does work on behalf of your client
applications. If you just run it and don't connect to it and ask it to
do some work, then it is pretty light on resources. In other words,
there is something that Firebird is attempting to do that is using a
lot of resources. This is usually one of two things:
1) Poorly written queries, a lack of supporting indices, or glitches
in the optimiser can cause Firebird to execute queries in an
inefficient manner. In this case, attempt to identify the culprit
query (or queries) and address the underlying cause.
2) Garbage collection. Firebird 1.5 has performance issues with
garbage collection involving indices with lots of duplicate values.
This is pretty much resolved in FB 2 (though it is still in beta).
Perhaps a transaction is being left open, causing a massive buildup of
garbage? There is not really a lot we can tell you. You will need to
get a better grasp of the environmental factors, but I would start
with gstat -h to check the gap between the oldest and oldest active
transactions (it should be well under 20000). Then look at the sorts
of operations in your system that leave lots of garbage (updating or
deleting every record in a large table with foreign key constraints
referencing a master table with relatively few records is a likely
candidate). Also try to analyse customer behaviour when the slowness
appears. Are they using a particular screen, etc.? (This may be hard
to tell.) If a pattern starts emerging, though, it can give you a good
starting point.
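To illustrate the gstat -h check above, here is a minimal Python sketch that pulls the two counters out of the header-page output and reports the gap. The sample text is a hypothetical fragment of what gstat prints; the "Oldest transaction" / "Oldest active" labels are the ones from its database header summary, but check your own gstat output for the exact wording on your version.

```python
import re

def transaction_gap(gstat_header: str) -> int:
    """Return (oldest active - oldest transaction) from `gstat -h` output."""
    def grab(label: str) -> int:
        # Find the first "<label>   <number>" line in the header dump.
        m = re.search(label + r"\s+(\d+)", gstat_header)
        if m is None:
            raise ValueError(f"label not found: {label}")
        return int(m.group(1))
    return grab("Oldest active") - grab("Oldest transaction")

# Hypothetical sample of the relevant `gstat -h` lines:
sample = """
Database header page information:
    Oldest transaction      41260
    Oldest active           41261
    Oldest snapshot         41261
    Next transaction        41262
"""

gap = transaction_gap(sample)
print(gap)              # a small gap means garbage collection is keeping up
print(gap < 20000)      # the rule of thumb from the text
```

If the gap keeps growing day after day, that points at a long-running or stuck open transaction rather than any particular query.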
As a preventative measure, every query you write should be tested
against at least one large (backup copy of a) production database. You
should check the query plan the optimiser is likely to choose, and
adjust the query slightly to prevent it from using indices with poor
selectivity. A tool like IBPlanalyzer is great for this.
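As a rough way to think about "poor selectivity": it can be estimated as the number of distinct key values divided by the row count, so an index where almost every row shares one of a handful of values is a bad candidate for a plan. A hypothetical Python sketch of that check (the counts and index names are made up; in practice they would come from your own queries against the production copy):

```python
def selectivity(distinct_values: int, row_count: int) -> float:
    """Fraction of distinct key values; closer to 1.0 is more selective."""
    if row_count == 0:
        return 0.0
    return distinct_values / row_count

# Hypothetical counts gathered from a large production copy:
indices = {
    "IDX_ORDER_ID": (1_000_000, 1_000_000),  # unique key: perfectly selective
    "IDX_STATUS":   (4, 1_000_000),          # 4 status values: poor selectivity
}
for name, (distinct, rows) in indices.items():
    s = selectivity(distinct, rows)
    flag = "poor" if s < 0.01 else "ok"
    print(f"{name}: {s:.6f} ({flag})")
```

An index like the IDX_STATUS example above is also exactly the kind that aggravates the Firebird 1.5 garbage collection issue mentioned earlier, since it holds huge numbers of duplicates.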
Adam