Subject Re: [ib-support] High CPU usage AFTER bulk insert completed
Author Helen Borrie
At 12:01 PM 20-03-02 +0800, Kenneth wrote:
>Hi
>
>This is getting interesting :-)
>
>I tried with forced writes=true, and still get the same results.
>I did 2000 item inserts, and after that processor usage lingers at 100% for
>about 25 seconds.
>This is on an Athlon 900MHz system with 384MB RAM.
>
>Insertion, too, was a bit slow (somewhere around 50 items per second).

Didn't you say you were writing 4 blobs with each insertion?


>Btw, not sure if this info is useful, but my code creates a prepared
>statement every time I perform the insertion.

That would certainly slow you down.

>(This limitation is, unfortunately, due to the fact that the Type 4 FB JDBC
>driver doesn't support reusing a statement after commits.)

What you want is to
  start the transaction
  prepare the statement with input parameters
  for each insert, iterate:
  (
    get input parameters
    pass input parameters
    perform and post an insert (--> writes a pending row version)
  )
  commit the transaction (--> moves the new rows onto database pages)

i.e. you prepare the statement once (once is enough if the statement doesn't
change except for the values), and
you don't commit statements, you commit transactions...
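In JDBC terms, that loop might look something like the sketch below. It's only
an illustration: the connection URL, table and column names, and the makeBlob()
helper are made up, and the driver class name shown is the usual one for the
Type 4 driver, so adjust to your own setup. The point is that prepareStatement()
is called once, executeUpdate() runs inside the loop, and commit() happens once
at the end.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BulkInsert {
    public static void main(String[] args) throws Exception {
        // Driver class and connection URL are illustrative; adjust to your setup.
        Class.forName("org.firebirdsql.jdbc.FBDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:firebirdsql:localhost/3050:/data/test.gdb",
                "SYSDBA", "masterkey");
        try {
            con.setAutoCommit(false);   // one transaction for the whole batch

            // Prepare ONCE; only the parameter values change from row to row.
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO ITEMS (ID, NAME, BLOB1, BLOB2, BLOB3, BLOB4) "
                    + "VALUES (?, ?, ?, ?, ?, ?)");

            for (int i = 0; i < 2000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "item " + i);
                ps.setBytes(3, makeBlob(i, 1));   // the four blobs for this row
                ps.setBytes(4, makeBlob(i, 2));
                ps.setBytes(5, makeBlob(i, 3));
                ps.setBytes(6, makeBlob(i, 4));
                ps.executeUpdate();               // posts the insert (pending row version)
            }

            con.commit();    // one commit moves the new rows onto database pages
            ps.close();
        } finally {
            con.close();
        }
    }

    // Stand-in for whatever actually produces the blob contents.
    private static byte[] makeBlob(int row, int n) {
        return ("blob " + n + " of row " + row).getBytes();
    }
}

Because everything runs inside a single transaction, the statement never has to
survive a commit, so the driver limitation you mention doesn't get in the way.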

>Just wondering if this has something to do with some internal memory
>management?

FB will do what you ask. If you ask it to sweat and strain, it will sweat
and strain.

>But then again, it's just 2000 prepared statements. It can't possibly take
>25 seconds to free them all, right?

It should be one prepared statement, not 2000.
Time is being gobbled by the (totally unnecessary) cleanup of 2000 transactions!
Committing will take some CPU time, of course: I/O, writing 8000 blobs out
to blob pages, updating indexes, cleanup.

Page_size can affect how long bulk inserts take, as well.
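The page size is fixed when the database is created. Just as an illustration
(the file name and credentials are made up, and 8192 is only an example
figure), from isql:

  CREATE DATABASE '/data/test.gdb'
    USER 'SYSDBA' PASSWORD 'masterkey'
    PAGE_SIZE 8192;

A larger page size generally means fewer page fetches for blob and index
writes during a bulk load, at the cost of more memory per cache buffer.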

cheers,
Helen

All for Open and Open for All
Firebird Open SQL Database · http://firebirdsql.org ·
http://users.tpg.com.au/helebor/