Subject: Re: [firebird-support] Embedded Firebird Performance
Author: Helen Borrie
Post date: 2005-04-10T23:24:12Z
At 05:55 PM 10/04/2005 -0400, you wrote:
>I have recently completed a port of my application from MSDE to Firebird.
>Everything is working now, but inserts are taking a long time.
>
>I have tried using -w async and setting buffers to 60000 to improve
>performance,

Setting buffers to 60000 won't improve performance; it will degrade it. If
you have a 4K page size, you are reserving more than 245 MB of RAM (60,000
pages x 4 KB) just for buffering pages. In practical terms, ~8000 pages is
about the maximum sensible buffer size for Superserver *when multiple
connections are sharing the cache*. The default 2048 pages should be more
than adequate for a stand-alone setup.
Increasing the buffer size has no effect on inserts, since the page buffers
are not read for inserts. For poor insert performance you need to look at
the number of inserts per transaction (break these into chunks of
7,000-10,000)
and the geometry of indexes.
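To give the idea, here is a minimal isql-style sketch, using a hypothetical
STOCK_ITEMS table; the point is the hard COMMIT after each chunk, which
releases the undo log for the rows inserted so far:

  INSERT INTO STOCK_ITEMS (ID, DESCR) VALUES (1, 'first row');
  /* ... several thousand more INSERT statements ... */
  COMMIT;
  INSERT INTO STOCK_ITEMS (ID, DESCR) VALUES (8001, 'start of next chunk');
  /* ... and so on, committing after each chunk ... */
  COMMIT;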
As for disabling forced writes, this makes no sense at all on a single-user
machine. It puts database integrity at high risk when you're over-taxing
machine resources (as you are with this crazy buffer setting).
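If you have already run gfix with -w async, you can put things back - a
sketch, with a placeholder database name and credentials (embedded access
may not need the latter):

  gfix -write sync -user SYSDBA -password masterkey stock.fdb
  gfix -buffers 2048 -user SYSDBA -password masterkey stock.fdb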
>so my first question is: Do these settings (using gfix) work
>for embedded Firebird.

Yes. The embedded server is just Superserver.

>And what else can I do to improve performance?

It would be interesting to know the statements you are using for the
inserts. Where are you reading the data from? On a single-user setup, the
machine resources are in heavy competition with one another. Inserts use
RAM; and the more inserts you stuff into a single transaction, the more
RAM gets used up during that transaction. That's because every uncommitted
insert gets mirrored by an entry in RAM that the engine can refer to in
order to delete the uncommitted records in case the transaction is rolled
back. This structure gets cleared when the insert operation is hard-committed.
What about other operations? Poor performance generally boils down to a
few things, the least significant of which would be whether forced writes
were on or the page buffers were too low.
For example, long duplicate chains in indexes will cause inserts to get
progressively slower and will cripple searches for your selects and
updates. You get these long chains when you have indexes (including
foreign key indexes) on columns that have few possible values in a large
table. (gstat -i will provide you with the index stats.)
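For instance, with a placeholder database name:

  gstat -i stock.fdb

In the per-index statistics, compare the max dup figure (the length of the
longest single duplicate chain) against nodes (the total number of entries);
a max dup running close to nodes means nearly every entry is a duplicate,
which is the pattern described above.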
A way of speeding up large bulk inserts is to deactivate the indexes (ALTER
INDEX BLAH INACTIVE) for the duration and then reactivate them
afterwards. That also has the advantage of rebalancing the indexes. Still,
if it's a poor FK index that is causing the problem, you can't deactivate
that without dropping the actual constraint. You'd need to consider how
much you really need that foreign key, and possibly replace it with custom
triggers to protect RI.
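For the index flip, the pattern is just this (index name hypothetical):

  ALTER INDEX IDX_STOCK_DESCR INACTIVE;
  COMMIT;
  /* ... run the bulk insert, committing in chunks ... */
  ALTER INDEX IDX_STOCK_DESCR ACTIVE;
  COMMIT;

And if you do decide to drop a troublesome FK constraint, a before-insert
trigger on the child table can protect RI instead. A minimal sketch, with
hypothetical table, column and exception names:

  CREATE EXCEPTION E_NO_PARENT 'No matching row in CUSTOMERS';

  SET TERM ^ ;
  CREATE TRIGGER ORDERS_BI FOR ORDERS
  ACTIVE BEFORE INSERT POSITION 0
  AS
  BEGIN
    /* refuse the insert if the parent row does not exist */
    IF (NOT EXISTS (SELECT 1 FROM CUSTOMERS c
                    WHERE c.CUST_ID = NEW.CUST_ID)) THEN
      EXCEPTION E_NO_PARENT;
  END ^
  SET TERM ; ^

You'd want matching protection on the parent side (for updates and deletes
of the key) as well, which a real FK constraint gives you for free.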
./hb