| Subject | Re: [firebird-support] Update: CPU Pegged on Firebird v2 |
|---|---|
| Author | Svein Erling Tysvaer |
| Post date | 2006-12-12T09:18:28Z |
Couldn't it simply be that the optimizer chose a different plan for some
query that you run frequently? There have been lots of changes in the
optimizer, and generally it should behave better than 1.5, but it isn't
difficult to find messages in this list about the optimizer choosing a
worse plan with 2.0 than with 1.5. If you generally have small tables
with a few records and your DML is simple, then this cannot be the case,
but I don't recall reading anything about what you actually do that
makes Firebird 'max out', nor anything about what is used with your
database (events, UDFs, recursive procedures, external tables etc.).
With tables of, say, 1 million records each, and a few joins, it isn't
too difficult to write a select statement that takes days to finish. It
will be a lot more difficult to write a statement that finishes in
seconds on Firebird 1.5 and needs hours on Firebird 2.0, but it isn't
unthinkable.
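One way to check the plan-change theory is to have isql print the plan the optimizer picks and compare it between the two server versions. A minimal sketch (the table and column names here are hypothetical, not from the original post):

```sql
-- In isql, SET PLAN ON makes each query print its chosen PLAN line.
SET PLAN ON;

-- Run one of the frequent queries under both 1.5 and 2.0 and compare
-- the printed PLAN lines; a switch from an indexed join to NATURAL
-- (a full table scan) on a large table would explain a pegged CPU.
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_on > '2006-01-01';
```

isql also supports SET PLANONLY, which prints the plan without actually executing the query, which is handy when the query itself is the one that runs for hours.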
Set
slalom91 wrote:
> I previously posted a message indicating my v2 Firebird installation
> had maxed out my CPU on Windows DB Server. To which, Helen replied
> and indicated she thought it may be a version issue with a mix of 1.5
> and 2.0. After giving this a shot (uninstalling 1.5, deleting
> install folder, and installing 2.0) I still ran into the same issue.
>
> My next step was to move from a remote protocol to a local protocol
> by moving the database to the same server where the applications that
> are accessing the database reside. Still no luck. By the way, in
> both these instances, local and remote, the OS was XP Pro.
>
> My next try was to move the database back to the remote protocol
> again and move to a different OS. I had a Windows 2003 machine that
> I could use temporarily, so I gave this a try. Still same issue.
>
> Finally, I decided it must have something to do with record locking,
> etc. as my environment has several simultaneous users and is also a
> multi-threaded environment. I submitted another post asking for
> suggestions on transaction parameters to which Ann replied and
> suggested that I use just concurrency and take the defaults for the
> rest. Still no luck.
>
> My final effort was to revert back to 1.5.3 by using the 1.5.3
> gbak.exe on a v2 installation. I was able to successfully get back
> to 1.5.3. Additionally, the DB server no longer locked up (Max CPU),
> but I did begin receiving some of the following errors from my client
> applications: "deadlock update conflicts with concurrent update."
>
> I was not previously receiving these errors from my 1.5.3 server
> before, but I also had different transaction parameters. One of
> which was "wait."
>
> After all of this, I have to assume the following:
> 1. The default on 1.5.3 is "no wait."
> 2. v2 has a bug in reporting deadlock conflicts which causes the CPU
> to max out on Windows platforms. I have no way of testing this on
> linux, sorry.
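The wait mode mentioned in point 1 can be set explicitly rather than relying on either version's default. A minimal sketch of the two variants as DSQL/isql statements (Firebird's "concurrency" mode is the SNAPSHOT isolation level):

```sql
-- WAIT: on a record-level conflict the transaction blocks until the
-- other transaction commits or rolls back.
SET TRANSACTION WAIT ISOLATION LEVEL SNAPSHOT;

-- NO WAIT: a conflict raises the "update conflicts with concurrent
-- update" error immediately instead of blocking.
SET TRANSACTION NO WAIT ISOLATION LEVEL SNAPSHOT;
```

Setting the mode explicitly in the client applications would at least rule out differing defaults as the source of the new deadlock errors after the downgrade.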