Subject | Re: [firebird-support] Re: Scale or tune firebird?
---|---
Author | Dalton Calford
Post date | 2004-11-11T03:26:30Z
Hi Robert,
You have multiple factors working against you.
1.) Speed of the machine processing the inserts
2.) Optimization of the code performing the inserts (preparing the
statements ahead of time and reusing the prepared statements, etc.)
3.) Speed and congestion of the local network/remote network
4.) Indexes, keys, domains and triggers on the destination table.
Factors 1, 2 and 3 are outside the current discussion, but you should
evaluate and address them if deemed appropriate.
When performing large numbers of inserts, an intermediary table is
valuable. That table should have no indexes, keys, triggers or any
other business rules applied to it.
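A minimal sketch of such a staging table, assuming a hypothetical orders feed (the table and column names below are illustrative, not taken from the original post):

```sql
/* Hypothetical staging table: no primary key, no foreign keys,
   no indexes and no triggers, so each insert only has to write
   the data pages. */
CREATE TABLE STG_ORDERS (
    ORDER_ID     INTEGER,
    CUSTOMER_ID  INTEGER,
    ORDER_DATE   TIMESTAMP,
    AMOUNT       NUMERIC(15,2)
);
```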
After the insert has occurred, you use a predefined stored procedure to
move the data from the intermediary table into the final table. I
suggest you create this stored procedure as a selectable procedure that
returns one or more rows detailing the status of the data insert. For
example, you can suspend every hundred rows so that the client knows what
is happening during the data move. Since all the work and verification
occurs on the server, you can make your application appear more
responsive to the client, have server-side bots perform the data move,
or have multiple machines enter data without worrying about contention,
because the data-move procedure deals with error conditions and either
reports them back to the calling client or writes them out to a log and
continues with the next unit of work.
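A sketch of such a selectable data-move procedure, assuming the hypothetical STG_ORDERS and ORDERS tables above plus an ERROR_LOG table for failed rows (none of these names come from the original post):

```sql
SET TERM ^ ;

CREATE PROCEDURE MOVE_STAGED_ORDERS
RETURNS (ROWS_MOVED INTEGER, ROWS_FAILED INTEGER)
AS
DECLARE VARIABLE V_ORDER_ID    INTEGER;
DECLARE VARIABLE V_CUSTOMER_ID INTEGER;
DECLARE VARIABLE V_ORDER_DATE  TIMESTAMP;
DECLARE VARIABLE V_AMOUNT      NUMERIC(15,2);
DECLARE VARIABLE V_BATCH       INTEGER;
BEGIN
  ROWS_MOVED = 0;
  ROWS_FAILED = 0;
  V_BATCH = 0;

  FOR SELECT ORDER_ID, CUSTOMER_ID, ORDER_DATE, AMOUNT
      FROM STG_ORDERS
      INTO :V_ORDER_ID, :V_CUSTOMER_ID, :V_ORDER_DATE, :V_AMOUNT
  DO
  BEGIN
    BEGIN
      INSERT INTO ORDERS (ORDER_ID, CUSTOMER_ID, ORDER_DATE, AMOUNT)
      VALUES (:V_ORDER_ID, :V_CUSTOMER_ID, :V_ORDER_DATE, :V_AMOUNT);
      ROWS_MOVED = ROWS_MOVED + 1;
    WHEN ANY DO
      BEGIN
        /* Log the offending row and carry on with the next one. */
        INSERT INTO ERROR_LOG (ORDER_ID, LOGGED_AT)
        VALUES (:V_ORDER_ID, CURRENT_TIMESTAMP);
        ROWS_FAILED = ROWS_FAILED + 1;
      END
    END

    /* Hand a progress row back to the selecting client every
       hundred rows processed. */
    V_BATCH = V_BATCH + 1;
    IF (V_BATCH >= 100) THEN
    BEGIN
      V_BATCH = 0;
      SUSPEND;
    END
  END

  /* Final summary row. */
  SUSPEND;
END^

SET TERM ; ^
```

The client simply runs `SELECT * FROM MOVE_STAGED_ORDERS` and receives a progress row roughly every hundred processed records, plus a final summary row when the move finishes.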
To speed up the inserts themselves, use RAM disks large enough to
hold some of your indexes, or disable the indexes during the insert and
re-enable them once the insert is complete.
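For example (IX_ORDERS_CUSTOMER is a hypothetical index name; indexes that back primary key or unique constraints cannot be switched off this way):

```sql
/* Deactivate the index before the bulk load ... */
ALTER INDEX IX_ORDERS_CUSTOMER INACTIVE;

/* ... run the inserts / the data-move procedure here ... */

/* ... then rebuild it in a single pass afterwards. */
ALTER INDEX IX_ORDERS_CUSTOMER ACTIVE;
```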
We can go into a long discussion about hardware optimization, data
verification and normalization, as well as optimum data storage methods,
but we can hold that off for another day.
best regards
Dalton
robertgilland wrote:
>While I don't have any answers for you,
>I do have a similar problem.
>We are doing huge numbers of transactions on a GDB,
>in a single process.
>We need to know how to fine-tune our hardware
>to produce the best results possible for a single
>high-injection process into a Firebird 1.5.1 database.
>
>Regards,
>
>Robert.