Subject: Re: [ib-support] High CPU usage AFTER bulk insert completed
Author: Ann W. Harrison
At 08:05 AM 3/21/2002 +0800, Kenneth Foo wrote:

> > Are you also doing a commit after each insert? That's making things
> > slower than they need to be ... at least in most cases.
>
>Yes, I did. :-)
>
>I have a system that sends out messages and stores some information in a DB
>to keep track of messages that have been sent. In case of a server crash,
>the system needs to know where it left off, hence the need for frequent commits.
>Basically, I was testing the average sustained rate of insertion for FB.

OK. Here's the problem - or at least one problem. Your insert affects
one data page and one index page per index on the table. At worst,
internal bookkeeping might involve another page or two. [Actually the
worst case is somewhat worse, involving one data page for RDB$PAGES, one
page allocation page, one pointer page for RDB$PAGES, and one pointer page
for the table. With a 4K page size, that should happen once every
40,000,000,000 records or so.] Normally, inserting one record changes
one page plus one page per index. Starting and committing a transaction
changes another two pages, so you may be doubling the number of writes
by using a transaction for each record.
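
To make that concrete, here is a rough sketch in C of batching, say,
1000 inserts per commit instead of committing each one. It is only an
illustration: the database name test.fdb, the table MSG_LOG, and the
batch sizes are invented, and error handling is cut to the bone.

/* Sketch only: test.fdb and MSG_LOG(BODY VARCHAR(80)) are invented
   names used for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <ibase.h>

static void check(ISC_STATUS *status, const char *where)
{
    /* status[0] == 1 with status[1] != 0 signals an error */
    if (status[0] == 1 && status[1]) {
        fprintf(stderr, "failed in %s:\n", where);
        isc_print_status(status);
        exit(1);
    }
}

int main(void)
{
    ISC_STATUS      status[20];
    isc_db_handle   db    = 0;
    isc_tr_handle   trans = 0;
    isc_stmt_handle stmt  = 0;
    char           *sql   = "INSERT INTO MSG_LOG (BODY) VALUES ('x')";
    long            i, n, total = 100000, per_commit = 1000;

    isc_attach_database(status, 0, "test.fdb", &db, 0, NULL);
    check(status, "attach");

    isc_dsql_allocate_statement(status, &db, &stmt);
    check(status, "allocate");

    for (i = 0; i < total; i += per_commit) {
        isc_start_transaction(status, &trans, 1, &db, 0, NULL);
        check(status, "start transaction");

        if (i == 0) {
            /* Prepare once; the compiled statement outlives the
               transaction and is reused for every later batch. */
            isc_dsql_prepare(status, &trans, &stmt, 0, sql, 3, NULL);
            check(status, "prepare");
        }

        for (n = 0; n < per_commit; n++) {
            isc_dsql_execute(status, &trans, &stmt, 3, NULL);
            check(status, "execute");
        }

        /* The two page writes for starting and committing are now
           spread over 1000 records instead of one. */
        isc_commit_transaction(status, &trans);
        check(status, "commit");
    }

    /* Release the compiled statement when you are done with it. */
    isc_dsql_free_statement(status, &stmt, DSQL_drop);
    isc_detach_database(status, &db);
    return 0;
}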

>But I didn't expect the processor usage to linger even after the operation
>though.
>I thought once a commit has been performed, there aren't any more
>significant CPU-intensive operations to perform?

Another thing to try is releasing each compiled statement after you
commit; statements that are never released keep holding server memory
and other resources. Use the isc_dsql_free_statement(status_vec, stmt,
DSQL_drop) call.
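
For what it's worth, DSQL_drop is the option that releases everything;
DSQL_close only closes an open cursor and leaves the statement prepared
for re-execution:

/* DSQL_close: close the open cursor, statement stays prepared */
isc_dsql_free_statement(status_vec, stmt, DSQL_close);

/* DSQL_drop: free the statement handle and its server resources */
isc_dsql_free_statement(status_vec, stmt, DSQL_drop);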

> > Have you traced the memory usage of the server?
>
>Well, I think it goes up to 144MB, according to Windows Task Manager.
>This is on a machine with 384MB of RAM.

Actually, what I was interested in was the server's memory footprint
before you began your string of updates and after it finished.

Regards,

Ann
www.ibphoenix.com
We have answers.