Subject Re: [firebird-support] Server Optimization
Author Helen Borrie
At 09:58 PM 5/08/2004 -0500, you wrote:
>I am currently running Firebird 1.5.1 SuperServer on my "worst case
>scenario" development workstation:
>
>Dell Optiplex
>Pentium II 333MHz
>128MB RAM
>2GB IDE Hard Drive
>10Mb LAN Card
>Windows XP Professional
>
>During idle times it has approximately 41MB of free Physical memory
>available.
>
>OK, now for the question ... I have been testing around with the Database
>Workbench tool and was able to insert the same number of records (100,000
>records committing 10,000 at a time) in 1min 45secs. Now that doesn't seem
>too bad, but catch this.
>
>One of my other development workstations is as follows:
>
>Pentium 4 1.7GHz
>256MB RAM
>80GB IDE Hard Drive
>100Mb LAN Card
>Windows XP Professional
>
>During idle times it has approximately 90MB of free Physical memory
>available.
>
>The same insert took 1min 28secs on this much beefier machine.
>
>I did some preliminary research online and turned off Forced Writes on both
>of the machines and it improved the insert by about 5 seconds on each. Is
>this something that is to be expected? I'm simply looking for the fastest
>data access and manipulation possible.
>
>Any help would be greatly appreciated!

A few thoughts, no magic bullets:

A missing piece in this description is the network protocol you're using.
If it's Win32 Named Pipes in both cases, then the noise-to-work ratio won't
be much different in either case. TCP/IP ought to be optimal in both
cases, especially if you can exclude Windows workgroup services from the
equation. Other inhibitory factors aside, the 100 Mb card would be expected
to dispatch 10,000 requests faster across an unfettered TCP/IP
connection. If you were using Named Pipes, it's at least worth comparing
the performance over TCP/IP.
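
For reference, the protocol is chosen by the form of the connection string
the client passes in; the server name and database path here are just
placeholders:

   TCP/IP        myserver:C:\data\test.fdb
   Named Pipes   \\myserver\C:\data\test.fdb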

One thing worth investigating is whether you're requesting unnecessary
prepares. A prepared statement survives a commit, so parameterise the insert
statement and prepare it once for all batches (see the sketch below).

Given the low memory on both systems, you might like to experiment with a
smaller batch size between commits, to reduce the size of the auto-undo log
(an in-memory structure) per batch.
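
To make that concrete, here is a minimal sketch using Java and the Jaybird
JDBC driver; the connection URL, credentials, table and column names are all
invented, and batchSize is the knob to experiment with:

   import java.sql.*;

   public class BulkInsert {
     public static void main(String[] args) throws Exception {
       // Invented URL and credentials -- adjust for your own server/database.
       String url = "jdbc:firebirdsql://myserver:3050/C:/data/test.fdb";
       Connection con = DriverManager.getConnection(url, "SYSDBA", "masterkey");
       con.setAutoCommit(false);

       // Prepare once; the prepared statement survives each commit.
       PreparedStatement ins = con.prepareStatement(
           "INSERT INTO TEST_DATA (ID, PAYLOAD) VALUES (?, ?)");

       int batchSize = 10000;   // try e.g. 1000 on the low-RAM machines
       for (int i = 1; i <= 100000; i++) {
         ins.setInt(1, i);
         ins.setString(2, "row " + i);
         ins.executeUpdate();
         if (i % batchSize == 0)
           con.commit();        // no re-prepare needed after the commit
       }
       con.commit();
       ins.close();
       con.close();
     }
   }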

For extreme, process-specific tuning, you might also consider taking the
users offline and disabling the indexes on the target table.
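
In Firebird that's done with ALTER INDEX; the index name here is invented,
and note that indexes backing primary key, unique or foreign key constraints
can't be deactivated this way. Reactivating afterwards also rebuilds the
index:

   ALTER INDEX IDX_TEST_DATA_PAYLOAD INACTIVE;
   -- ... run the bulk insert ...
   ALTER INDEX IDX_TEST_DATA_PAYLOAD ACTIVE;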

With forced writes off, the newer 80GB disk might give an advantage at
write time because it's probably spinning at 7200 rpm rather than the 5400 or
3600 of the old 2GB one; on the memory side, disk caching isn't likely to
gain you much because both systems have relatively low RAM and would be
likely to start swapping during a batch anyway. Writing the same data to disk
twice doesn't make sense, especially if the page file being swapped to is on
the same physical disk as the database. Both machines would tend to slow
down, and the one with less RAM and a slower, less capacious disk would slow
down more.
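
For the record (since you've already toggled it in your tool), forced writes
can also be switched from the command line with gfix; the database path and
credentials below are placeholders:

   gfix -user SYSDBA -password masterkey -write async C:\data\test.fdb
   gfix -user SYSDBA -password masterkey -write sync  C:\data\test.fdb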

/heLen