Subject | Re: INSERT Performance
---|---
Author | robert_difalco
Post date | 2005-06-23T22:56:44Z
> On Windows with forced writes off, yes, inserts can be CPU bound.

For us, even with forced writes off they seem to be I/O bound.
Basically, each thread handles a batch of updates and inserts that ends
with a commit. With forced writes on, this seems to be faster if
everything is forced into a single thread (i.e. synchronous commits).
With forced writes off, the best-performing number of threads is around
4-5. Beyond 5 threads in the pool, performance begins to degrade again
on I/O waits.
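The setup described above — a small fixed pool where each worker runs one batch of statements and commits once at the end — could be sketched roughly as below. The pool size of 4, the batch sizes, and the stubbed-out counter standing in for the actual JDBC work are all illustrative assumptions, not the poster's real code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: one worker per batch, small fixed pool (4-5 threads was the
// sweet spot reported with forced writes off). The JDBC work is stubbed
// out as a counter; the real task would borrow a Connection, run its
// inserts/updates with setAutoCommit(false), and commit() at the end.
public class BatchPool {
    static final AtomicInteger rowsCommitted = new AtomicInteger();

    static void runBatch(int batchSize) {
        // Placeholder for: executeBatch() of inserts, then one commit.
        rowsCommitted.addAndGet(batchSize);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 20; i++) {
            pool.submit(() -> runBatch(50)); // 20 batches of 50 rows each
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("rows committed: " + rowsCommitted.get());
    }
}
```

With a single thread (pool size 1) the commits become effectively synchronous, which is the forced-writes-on case described above.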
> What I was asking here is, are you using JDBC, ODBC,
> .NET, php, or the C-API?

Oh, sorry. I'm using JDBC (Jaybird).
>
> It's possible. A local classic application or, on Windows, an
> embedded application puts the database code and the application
> code in the same process.

Embedded isn't really an option. Our server does a lot of work, and
profiling showed the system to be more performant and scalable when
the database server and our server are on different machines.
> Why do you think that a single process (i.e. SuperServer)
> writing fixed length blocks at the end of a file - which is
> what happens with a bulk load - is going to bounce the
> head around?

I don't think that will. But I think streaming to BLOB handles and
concurrent commits to many related tables and indices would. But I
don't know the system like you do.
> Err, you shouldn't be able to deactivate an index during an insert.
> Changing the state of an index requires that the table not be in
> use.

That explains why that didn't work well for me. :)
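For reference, the deactivate-then-reload pattern being discussed looks like the sketch below in Firebird SQL; the index name is hypothetical, and the statements only succeed while no other transaction has the table in use:

```sql
-- Hypothetical index name; sketch only. Must run while the table is
-- not in use by any other transaction.
ALTER INDEX idx_events_ts INACTIVE;

-- ... bulk load the rows here ...

ALTER INDEX idx_events_ts ACTIVE;  -- rebuilds the index in one pass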
> Making your blobs smaller will help.

Yup, that seems to make a big difference. And for some reason,
converting blobs that have a fixed upper limit to OCTET VARCHAR fields
also seems to make a big difference.
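A conversion like that might look as follows in Firebird SQL; the table and column names are hypothetical, the 255-byte limit is an assumed upper bound, and on older servers the blob-to-VARCHAR copy may need to be done client-side rather than in a single UPDATE:

```sql
-- Sketch only: replace a small, bounded blob with a binary VARCHAR
-- (CHARACTER SET OCTETS), which is stored in-line with the row.
ALTER TABLE events ADD payload_new VARCHAR(255) CHARACTER SET OCTETS;

UPDATE events SET payload_new = payload;

ALTER TABLE events DROP payload;
ALTER TABLE events ALTER COLUMN payload_new TO payload;
```

The win comes from the value living in the data page with the rest of the row instead of on separate blob pages, which saves a page fetch (and extra writes) per record.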
As always, thanks for your insights...
R.