Subject Re: INSERT Performance
Author robert_difalco
>
> On Windows with forced writes off, yes, inserts can be CPU bound.
>

For us, even with forced writes off they seem to be IO bound.
Basically, each thread handles a batch of updates and inserts that
ends with a commit. With forced writes on, this seems to be faster if
everything is forced into a single thread (i.e. synchronous commits).
With forced writes off, the best-performing number of threads seems
to be around 4-5. Moving beyond 5 threads in the pool, performance
begins to degrade again on IO waits.
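
To make that concrete, each worker in the pool does roughly the
following. This is a simplified sketch; the URL, credentials, table
and column names are made up, and (as I mention below) we're on JDBC
with Jaybird:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class InsertWorker implements Runnable {
    // Illustrative values only; the real URL, credentials, table and
    // column names differ.
    private static final String URL =
            "jdbc:firebirdsql:dbhost/3050:/data/ourdb.fdb";

    private final byte[][] batch; // the payloads this worker must insert

    InsertWorker(byte[][] batch) {
        this.batch = batch;
    }

    @Override
    public void run() {
        try (Connection con = DriverManager.getConnection(URL, "sysdba", "masterkey")) {
            con.setAutoCommit(false); // one explicit commit per batch
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO EVENT_LOG (PAYLOAD) VALUES (?)")) {
                for (byte[] row : batch) {
                    ps.setBytes(1, row);
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            con.commit(); // the commit that ends the batch
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

Each such worker gets submitted to the pool; with forced writes off,
roughly 4-5 of them in flight is our sweet spot.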

>
> What I was asking here is, are you using JDBC, ODBC,
> .NET, php, or the C-API?
>

Oh, sorry. I'm using JDBC (Jaybird).

>
> It's possible. A local classic application or, on Windows, an
> embedded application puts the database code and the application
> code in the same process.
>

Embedded isn't really an option. Our server does a lot of work, and
profiling showed the system to be more performant and scalable when
the database server and our server are on different machines.

>
> Why do you think that a single process (i.e. SuperServer)
> writing fixed length blocks at the end of a file - which is
> what happens with a bulk load - is going to bounce the
> head around?
>

I don't think that will. But I think streaming to BLOB handles and
concurrent commits to many related tables and indices would. Then
again, I don't know the system like you do.
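
By "streaming to BLOB handles" I mean roughly this on our side (a
sketch only; the table and column names are invented):

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class BlobWriter {
    // Streams a file into a blob column. Each insert touches the table's
    // data pages, its indices, and the blob pages, which is the mix of
    // writes I suspect moves the head around.
    static void storeDocument(Connection con, long id, File file) throws Exception {
        con.setAutoCommit(false);
        try (InputStream in = new FileInputStream(file);
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO DOCUMENT (ID, BODY) VALUES (?, ?)")) {
            ps.setLong(1, id);
            ps.setBinaryStream(2, in, (int) file.length()); // streamed, not buffered
            ps.executeUpdate();
        }
        con.commit();
    }
}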

>
> Err, you shouldn't be able to deactivate an index during an insert.
> Changing the state of an index requires that the table not be in
> use.
>

That explains why that didn't work well for me. :)
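
For the archives, this is the sort of thing I had been trying:
deactivate the index before the load and reactivate it afterwards.
The index name is made up, and as you say it only works if nothing
else has the table in use:

import java.sql.Connection;
import java.sql.Statement;

public class IndexToggle {
    // ALTER INDEX ... INACTIVE fails if another attachment has the table
    // in use, so this approach needs exclusive access to the table.
    static void bulkLoadWithoutIndex(Connection con, Runnable bulkLoad) throws Exception {
        con.setAutoCommit(false);
        try (Statement st = con.createStatement()) {
            st.execute("ALTER INDEX IDX_EVENT_LOG_TS INACTIVE");
            con.commit();
            bulkLoad.run();                                    // the actual inserts
            st.execute("ALTER INDEX IDX_EVENT_LOG_TS ACTIVE"); // rebuilds the index
            con.commit();
        }
    }
}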

>
> Making your blobs smaller will help.
>

Yup, that seems to make a big difference. And for some reason,
converting blobs that have a fixed upper limit to VARCHAR columns in
character set OCTETS also seems to make a big difference.
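
For reference, the shape of that change is roughly this (the table
names and the 8192 cap are just examples I picked for the sketch):

import java.sql.Connection;
import java.sql.Statement;

public class SchemaSketch {
    // Before and after: a payload with a known upper bound moves from a
    // blob column to a VARCHAR in character set OCTETS, so on the JDBC
    // side it is handled as ordinary row data (setBytes/getBytes) rather
    // than through a blob.
    static void createTables(Connection con) throws Exception {
        try (Statement st = con.createStatement()) {
            st.execute("CREATE TABLE EVENT_LOG_OLD (ID BIGINT NOT NULL PRIMARY KEY, "
                     + "PAYLOAD BLOB SUB_TYPE 0)");
            st.execute("CREATE TABLE EVENT_LOG_NEW (ID BIGINT NOT NULL PRIMARY KEY, "
                     + "PAYLOAD VARCHAR(8192) CHARACTER SET OCTETS)");
        }
    }
}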

As always, thanks for your insights...

R.