Subject | fb_inet_server processes slowing down 2.1.3 CS (OpenSuse 10.2 64 bit) |
---|---|
Author | Stefan Sinne |
Post date | 2010-04-22T11:47:32Z |
Today we got a call from one of our customers telling us that our
application was responding very slowly to their input.
I told them to close the application on all clients, and after that I
verified this with a select from
MON$ATTACHMENTS, where I found just one row, for my own current connection.
I ran a select on this database, which took 50 seconds, and then
executed the same select on a copy
of the same database (which we make every night), which took only 5 seconds!
I took a look at the running processes and found 11 fb_inet_server
processes.
After killing all of these processes, the select took the same 5 seconds
on both databases.
I have no idea how these fb_inet_server processes can stay alive with
all users disconnected,
nor what they are doing. The only thing I could see was the files and
libraries they were accessing (with lsof).
If this is of interest I can post the list here.
This was the third time in two weeks that this has happened, and it only
happens with this customer.
The difference from our other installations is that this customer scans
a lot of documents and stores them in their database;
so far about 13,000 documents with an overall size of about 22 GB.
I opened another thread in this forum two days ago ('gbak restore
without -service switch very slow for db with lots o blobs')
about the same database, which restores very slowly under
some circumstances.
The two problems are probably related.
Does anyone have an idea how I can get more information, for example
about what those 'ghost' fb_inet_server
processes are doing?
My idea would be to set up a crontab job that, every minute or so,
selects all data from mon$attachments and mon$statements
into a file, and then wait for the next performance degradation.
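
Roughly what I have in mind for that cron job is sketched below (untested;
the isql location, database path, credentials, and log path are placeholders
for our real ones):

```python
#!/usr/bin/env python
# Sketch of the cron job: append the contents of mon$attachments and
# mon$statements to a log file on every run, so there is something to
# look at after the next slowdown. Paths and credentials are placeholders.
import subprocess
import time

ISQL = "/opt/firebird/bin/isql"          # assumed install location
DB = "localhost:/data/customer.fdb"      # placeholder database
LOG = "/var/log/fb_monitor.log"          # placeholder log file

SQL = "SELECT * FROM MON$ATTACHMENTS;\nSELECT * FROM MON$STATEMENTS;\n"

proc = subprocess.Popen(
    [ISQL, "-user", "SYSDBA", "-password", "masterkey", DB],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT, universal_newlines=True)
out, _ = proc.communicate(SQL)

with open(LOG, "a") as f:
    f.write("===== %s =====\n" % time.strftime("%Y-%m-%d %H:%M:%S"))
    f.write(out)
```

A crontab entry would then simply call this script once a minute.
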
Alternatively, I could write a small script that kills all
fb_inet_server processes not listed in mon$attachments,
and run it every 10 minutes or so.
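
A rough sketch of that kill script (also untested; it assumes the server PID
of each live connection can be read from MON$SERVER_PID in mon$attachments,
and again the isql and database details are placeholders):

```python
#!/usr/bin/env python
# Sketch: kill fb_inet_server processes whose PID does not belong to any
# connection listed in mon$attachments. Untested; placeholders throughout.
import os
import signal
import subprocess

ISQL = "/opt/firebird/bin/isql"          # assumed install location
DB = "localhost:/data/customer.fdb"      # placeholder database

# Server PIDs that mon$attachments still knows about.
proc = subprocess.Popen(
    [ISQL, "-user", "SYSDBA", "-password", "masterkey", DB],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True)
out, _ = proc.communicate("SELECT MON$SERVER_PID FROM MON$ATTACHMENTS;\n")
known = set(int(tok) for tok in out.split() if tok.isdigit())

# All fb_inet_server processes currently running.
ps = subprocess.Popen(["ps", "-C", "fb_inet_server", "-o", "pid="],
                      stdout=subprocess.PIPE, universal_newlines=True)
running = set(int(tok) for tok in ps.communicate()[0].split())

# The process that served this script's own isql connection was listed in
# mon$attachments while the query ran, so it is in 'known' and stays untouched.
for pid in sorted(running - known):
    print("killing orphaned fb_inet_server, pid %d" % pid)
    os.kill(pid, signal.SIGTERM)
```
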
Any other ideas?
Thanks in advance,
Stefan