Subject: Re: [firebird-support] Re: INSERT Performance
Author: Ann W. Harrison
robert_difalco wrote:
>
>> What operating system?
>
> Multiple platforms. On Windows, forced writes is off. And the records
> can be large with significant blob data.
>
>> How are you measuring I/O performance?
>
> Why? Would you expect INSERT performance to be CPU bound?

On Windows with forced writes off, yes, inserts can be CPU bound.
Windows tends not to write out the O/S file cache until the file is
closed, so there's no I/O load on a running server. Even on other
systems, inserts can be CPU bound, depending on the triggers and
constraints involved.
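
As a hypothetical illustration (none of these object names come from
this thread), every index, check constraint, and trigger on a table adds
per-row CPU work on the server side of an INSERT. A minimal JDBC sketch,
assuming an open java.sql.Connection named conn:

    // Hypothetical schema sketch; all names (ORDERS, GEN_ORDERS, etc.)
    // are invented. Each element below costs server CPU per inserted row.
    try (java.sql.Statement ddl = conn.createStatement()) {
        ddl.execute("CREATE GENERATOR GEN_ORDERS");
        ddl.execute("CREATE TABLE ORDERS ("
                + " ID INTEGER NOT NULL PRIMARY KEY,"            // index upkeep
                + " AMOUNT NUMERIC(18,2) CHECK (AMOUNT >= 0),"   // constraint check
                + " NOTES BLOB SUB_TYPE TEXT)");
        ddl.execute("CREATE INDEX IDX_ORDERS_AMOUNT ON ORDERS (AMOUNT)");
        // A BEFORE INSERT trigger is executed for every row inserted.
        ddl.execute("CREATE TRIGGER TRG_ORDERS_BI FOR ORDERS BEFORE INSERT AS"
                + " BEGIN"
                + "  IF (NEW.ID IS NULL) THEN NEW.ID = GEN_ID(GEN_ORDERS, 1);"
                + " END");
    }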
>
>> What interface are you using?

What I was asking here is: are you using JDBC, ODBC, .NET, PHP, or the
C API?

>> What Firebird architecture (i.e. Classic or Superserver)? How many
>> indexes are active?
>
> Superserver. Should I be considering Classic?

It's possible. A local Classic application or, on Windows, an embedded
application puts the database code and the application code in the same
process.
>
> It seems intuitive to me that if the inserts are IO bound that forcing
> them to be sequential would speed up the disk as the write head would
> not bounce around so schizophrenically.

Why do you think that a single process (i.e. SuperServer) writing
fixed-length blocks at the end of a file - which is what happens with a
bulk load - is going to bounce the head around? The various clients all
put their new records into database pages created in cache. The
currently active page will have records from different clients. Pages
are written out by the server as necessary, either because the cache is
full or because a client commits.

If there is no data page in the page cache with space, the server
allocates a new page, using empty pages within the database first and
extending the file only when those are exhausted. A major load will
therefore start by reusing all the empty space, then extend the
database. Modified pages stay in cache until the cache fills completely
or a transaction commits. When a transaction commits, all the pages it
affected are written out, unless they have been written previously.
Under a heavy load, that will include many pages holding records
created by transactions that have not yet committed.
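
One practical consequence for a bulk load: if you batch many inserts
into each transaction and commit at intervals, the commit-time flush is
amortized over many rows instead of happening per row. A minimal JDBC
sketch, assuming the Jaybird driver, an invented connection URL and
credentials, and the hypothetical ORDERS table from above:

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BulkLoad {
        public static void main(String[] args) throws Exception {
            // URL, credentials, table, and batch size are all assumptions.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:firebirdsql://localhost:3050/orders.fdb",
                    "SYSDBA", "masterkey")) {
                conn.setAutoCommit(false); // one transaction per batch, not per row
                try (PreparedStatement ins = conn.prepareStatement(
                        "INSERT INTO ORDERS (ID, AMOUNT) VALUES (?, ?)")) {
                    for (int i = 1; i <= 1000000; i++) {
                        ins.setInt(1, i);
                        ins.setBigDecimal(2, BigDecimal.valueOf(i % 100));
                        ins.executeUpdate();
                        if (i % 10000 == 0) {
                            conn.commit(); // flushes this transaction's dirty pages
                        }
                    }
                    conn.commit();         // flush the final partial batch
                }
            }
        }
    }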


>
> Also, it seems to me that I've gotten exceptions in the past when
> trying to deactivate indices during inserts and reactivating them
> after commit.

Err, you shouldn't be able to deactivate an index during an insert.
Changing the state of an index requires that the table not be in use,
which is consistent with the exceptions you saw.
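
When the table really is idle, the state change is ordinary DDL. A
hedged sketch of the usual bulk-load pattern, again assuming an open
connection conn with auto-commit off and the invented index from above
(indexes backing primary or foreign key constraints cannot be
deactivated this way):

    // ALTER INDEX fails with an "object in use" error if any attachment
    // still has the table open - which matches the exceptions described.
    try (java.sql.Statement ddl = conn.createStatement()) {
        ddl.execute("ALTER INDEX IDX_ORDERS_AMOUNT INACTIVE");
        conn.commit();

        // ... run the bulk insert here, with no index maintenance ...

        ddl.execute("ALTER INDEX IDX_ORDERS_AMOUNT ACTIVE"); // rebuilds the index
        conn.commit();
    }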

> It's a pretty interactive application with millions of
> related records; multiple connections are doing a lot of things at a
> time. Some doing queries, etc. I think I just have to figure out how
> to make my inserts smaller and maybe minimize my BLOB usage. :)
>
Making your blobs smaller will help.

Regards,


Ann