Subject | Forced writes and transaction throughput?
---|---
Author | Kjell Rilbe
Post date | 2008-12-03T09:15:32Z
Hi,
Running FB 2.1 (WI-V2.1.1.17910) on Windows 32 bit, superserver.
Following the general recommendation I have forced writes on, which works
fine in my applications, but not so well when importing data using Microsoft's
(lousy) import/export wizard, the one that comes with SQL Server 2000.
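(For reference, I set forced writes per database with gfix, roughly like
this; the path and credentials are placeholders:)

```
gfix -write sync  -user SYSDBA -password masterkey C:\data\import.fdb
gfix -write async -user SYSDBA -password masterkey C:\data\import.fdb
```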
The problem is that this tool seems to import each record in its own
transaction. This causes FB to do a lot of disk I/O in a manner that
makes the disk head jump back and forth A LOT. I could record the sound
and use it for motorcycle sound effects in a movie. ;-) And it's SLOW,
VERY SLOW.
But as far as I understand, FB is being used in several places in
systems with thousands of users. Does that mean thousands of SIMULTANEOUS
users? If so, how do you configure FB to handle a high
transaction/commit rate without this "motorcycle" effect (a thrashing
disk head)?
I mean, with many, many users (supposedly one of FB's strong points,
considering its locking mechanisms) there are bound to be a lot of
transactions. I can't really believe the engine is limited to as low a
commit rate as I'm seeing during this batch import.
NOTE: I KNOW batch imports shouldn't be done this way, but rather with
one commit per N records, where N is perhaps 100 or 1,000. I am NOT
asking how to improve batch import performance, because I already know
that. It's the commit rate capacity I'm wondering about.
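(For the record, the kind of batching I mean looks roughly like this. It's
a sketch only, assuming the Python fdb driver; table, column, and file
names are made up.)

```python
# Rough sketch of commit-per-N-records batching, assuming the Python
# "fdb" driver (any PEP 249 Firebird driver looks much the same).
# Table, column, and file names are made up.
import csv
import fdb

BATCH_SIZE = 1000  # one synchronous flush per 1000 rows instead of per row

con = fdb.connect(dsn="localhost:/data/import.fdb",
                  user="SYSDBA", password="masterkey")
cur = con.cursor()

with open("records.csv", newline="") as f:
    for i, row in enumerate(csv.reader(f), start=1):
        cur.execute("INSERT INTO IMPORT_TABLE (COL1, COL2) VALUES (?, ?)", row)
        if i % BATCH_SIZE == 0:
            con.commit()        # forced write happens here, once per batch
            cur = con.cursor()  # some drivers close open cursors on commit

con.commit()  # flush the final partial batch
con.close()
```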
Kjell
--
--------------------------------------
Kjell Rilbe
DataDIA AB
E-mail: kjell@...
Phone: 08-761 06 55
Mobile: 0733-44 24 64