Subject | Re: [firebird-support] FB 2.0 50% CPU
---|---
Author | Helen Borrie
Post date | 2007-04-25T02:49:30Z
At 09:30 AM 25/04/2007, you wrote:
>My FB 2.0 installation works great, most of the time. About twice a
>week, the FB service on the server grabs the CPU and pins it at 50% or
>more, slowing down my whole system. It stays in this condition for
>hours until I stop the FB service and restart, then everything is OK
>again for a few days. There are 2 things I need to know:
>
>1) Is there a way to tell FB to stop doing the runaway transaction?

Not in Fb 2.0.x and below.

>2) Besides a badly written query, could Microsoft Access using a
>Table object over ODBC cause this same problem?

Definitely. Don't use table objects in a client/server environment.

>My understanding of
>MS Access tables is that if you "Open" the table, it will attempt to
>pull back every single record in the FB table.

Yep. It's a SELECT * FROM ATABLE query. ODBC is a very blunt
instrument. You haven't provided any info about what you're using
this query for but I'm supposing you have the default ODBC
transaction setting read/write. That keeps every record in ATABLE
"interesting" to the Firebird server for as long as (in Access terms)
the table is open. So it's likely you are building up a mega-pile of
garbage if you have users updating or deleting stuff.
>Rebooting to stop a runaway process is bad for business...

...as is giving users applications that think database tables are
spreadsheets or local physical files to which they have exclusive access.

>Thanks in advance for any help you can provide.

If you are using Access and a table-driven model as the front end to
an interactive read/write application with multiple users then you
are stuck with it.
Firebird is not a desktop database so any client application is
sharing resources with all other client applications, as well as with
the internal processing the database engine does to make Firebird
a multi-user engine. There is this concept called a transaction that
is designed to protect the database from conflicting
operations. Using the Access interface (which doesn't have
transactions) and ODBC (which hides transactions from you) doesn't
magically convert your Firebird database into an Access database.
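To see why those hidden, never-ending read/write transactions hurt, here is a toy sketch of multi-version record storage in plain Python. This is emphatically NOT Firebird's real engine - the `Record` class, version tuples and the simplified visibility rule are all invented for illustration - but it shows the mechanism: back versions of a record can only be collected once no active transaction might still need them, so one transaction left open forever pins everything.

```python
# Toy model of multi-version record storage (illustrative only;
# this is NOT Firebird's actual implementation).

class Record:
    def __init__(self):
        self.versions = []  # (txn_id, value), newest last

    def update(self, txn_id, value):
        self.versions.append((txn_id, value))

    def collect_garbage(self, oldest_active_txn):
        """Keep the newest version, plus any back version a still-active
        transaction might need to see; return how many were removed."""
        keep = [v for v in self.versions[:-1] if v[0] >= oldest_active_txn]
        removed = len(self.versions) - 1 - len(keep)
        self.versions = keep + self.versions[-1:]
        return removed

rec = Record()
rec.update(1, "value-1")
for txn in range(2, 12):
    rec.update(txn, f"value-{txn}")

# An Access-style session that opened the table in txn 1 and never
# committed keeps oldest-active pinned at 1: nothing can be collected.
assert rec.collect_garbage(oldest_active_txn=1) == 0

# Once that transaction hard-commits, oldest-active advances and all
# ten back versions become collectible garbage.
assert rec.collect_garbage(oldest_active_txn=12) == 10
```

The point of the sketch: garbage collection is gated on the oldest transaction the server still considers "interesting", not on how much garbage exists.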
So - the Access/ODBC interface is providing you with all kinds of
ways to cripple performance on a multi-user system. Your
table-driven approach will clog up the network and crowd the server's
memory with buffers that won't be emptied until the application has
fetched the last record....it's also about the fattest interface
layer known to mankind, as well as one of the most dumbed-down.
Then, with Firebird's multi-version architecture (MVA) your
desktop-style solution is likely to build up garbage to levels that
can't be ignored. You should be monitoring statistics and would
almost certainly need to be running sweeps regularly. This won't be
a magic bullet, given that your perpetual long-running R/W
transactions will trap a lot of the garbage, but it might help to
extend the mileage between your server restarts.
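For the monitoring side, one concrete number to watch is the gap between the "Oldest transaction" and "Next transaction" counters that `gstat -h` prints in the database header. A sketch of reading that gap, assuming a header fragment in roughly gstat's shape (the numbers below are made up for the example):

```python
import re

# A made-up fragment shaped like `gstat -h` header output
# (the counter values are invented for this example).
header = """
    Oldest transaction    41200
    Oldest active         41201
    Oldest snapshot       41201
    Next transaction      98000
"""

def header_counters(text):
    """Pull the transaction counters out of a gstat-style header."""
    counters = {}
    for label in ("Oldest transaction", "Oldest active",
                  "Oldest snapshot", "Next transaction"):
        m = re.search(label + r"\s+(\d+)", text)
        if m:
            counters[label] = int(m.group(1))
    return counters

c = header_counters(header)

# A gap that keeps growing between checks means old record versions
# are piling up faster than garbage collection can clear them.
gap = c["Next transaction"] - c["Oldest transaction"]
assert gap == 56800
```

On a healthy system that gap stays roughly steady between sweeps; a gap that climbs for days is the statistical signature of the stuck-transaction problem described above.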
You can fix this but there is really no free lunch. ODBC doesn't
support multiple transactions within a single connection so you don't
have the means to provide a live, read-only view of key values as a
selector. This means that you will need to create a selector
structure in the client by querying the essential keys of the table
and storing them locally, before the user starts doing DML requests
on selected records. Make sure you hard-commit after fetching this
data!! If there is a lot of DML happening in the system, you'll have
to re-run this routine sufficiently often to ensure that the client's
off-line view doesn't get too far out of synch with database state.
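The fetch-keys-then-commit pattern might look something like the following. Since no Firebird DSN can be assumed here, the sketch uses Python's stdlib `sqlite3` purely as a stand-in connection, and `atable`/`id`/`payload` are placeholder names - the shape of the pattern is what matters, not the driver.

```python
import sqlite3

# Stand-in database; against Firebird you would connect through your
# ODBC driver (or a native Firebird driver) instead of sqlite3.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE atable (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO atable VALUES (?, ?)",
                 [(i, f"row {i}") for i in range(1, 6)])
conn.commit()

def refresh_selector(conn):
    """Read only the key column, then hard-commit immediately so the
    read transaction doesn't stay 'interesting' to the server."""
    keys = [row[0] for row in conn.execute("SELECT id FROM atable")]
    conn.commit()   # hard commit: the transaction is finished
    return keys     # the client's off-line list of keys

def update_one(conn, key, new_payload):
    """Touch exactly one record via a keyed WHERE clause and commit at
    once - never hold a read/write transaction over a whole table."""
    conn.execute("UPDATE atable SET payload = ? WHERE id = ?",
                 (new_payload, key))
    conn.commit()

selector = refresh_selector(conn)
update_one(conn, selector[0], "edited")

# Re-run the selector query periodically so the local key list does
# not drift too far from the database state.
selector = refresh_selector(conn)
```

Every transaction here is short and ends in a hard commit, which is exactly what keeps record versions collectible on the server side.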
Limit read-write datasets to the minimum number of records via a
WHERE clause, and hard-commit often. One record is ideal; 200
records is too many. More than that is plain daft. Never use
soft-commits (Commit Retaining) in this transaction-unaware
environment - even though I suspect that Commit Retaining comes as
the default for most ODBC drivers.
All that said, 50% peak cpu usage isn't excessive. Most people
complain that Firebird doesn't seem to use *enough* cpu time! I
really don't think cpu availability is your main problem here at all
- unless of course Firebird is sharing the machine with a lot of
other services. In that case, it's probably competing unfavourably
for other resources as well. It looks much more like a garbage
disposal problem to me.
./heLen