Subject | RE: [IBO] Long batch
---|---
Author | Kaputnik
Post date | 2001-12-05T07:15:10Z
> -----Original Message-----
> From: Renato Martins [mailto:renato@...]
> Sent: Wednesday, December 05, 2001 7:14 AM
> To: ibobjects@yahoogroups.com
> Subject: [IBO] Long batch
>
> I've got a query which updates a table, record by record. It takes a
> long time to run (with a local server, it takes about 14 minutes; I've
> got about 1000 records, which get their values through lots of
> queries...).

That's definitely too long.
>
> I've noticed a great disk activity in the process. It seems that every
> update I make (by setting a TField's value) is written to disk.

Let's see... TFields are only available in the TIBO components. Are you
doing the updates with them?

> Two questions:
> 1. Is there a way to apply the whole update only at the end of the
> job, and do the work "silently" (I couldn't create an SP to do the
> job, the values are obtained from really complex conditions...)?? I
> mean, as if I was using TClientDataSet, but without the record
> fetching...

Well, almost nothing is too complex to be done in a stored procedure.
Take the complex queries and put them into SPs. Select the result of
the SP and do only the additional calculations which are not directly
related to SQL in your app.
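For example (just a sketch; the SP name CALC_VALUES and the procedure
ProcessRecord are invented, and the exact IBO property names may differ
in your version):

    // Select from a selectable SP that returns the pre-computed values.
    IB_Cursor1.SQL.Text :=
      'SELECT REC_ID, CALC_VALUE FROM CALC_VALUES(:BATCH_ID)';
    IB_Cursor1.ParamByName('BATCH_ID').AsInteger := 1;
    IB_Cursor1.Open;
    while not IB_Cursor1.Eof do
    begin
      // Only the calculations not expressible in SQL stay on the client.
      ProcessRecord(IB_Cursor1.FieldByName('REC_ID').AsInteger,
                    IB_Cursor1.FieldByName('CALC_VALUE').AsFloat);
      IB_Cursor1.Next;
    end;
    IB_Cursor1.Close;
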
> BTW, I am already using TIB_Cursor, and TIB_Query, with a TIB_DSQL
> here and there.
>
> 2. If I update a record (by putting it in edit mode through code and
> changing its TField values), don't call Post, and then move to
> another record, is it going to be updated????

Use exactly one IB_Query or IB_Cursor for the update statements. For
all other SQL work, use separate queries/cursors.
Start a transaction manually for the update, and use a separate
transaction for all the other stuff. Don't set the update query into
edit-mode until you have done all calculations, and update it at the
last possible moment. After that, move to the next record (after
posting) and start from the beginning.
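Roughly like this (a sketch only; UpdTransaction, UpdQuery and
CalculateNewValue are invented names, and UpdTransaction is attached to
the update query alone):

    procedure RunBatch;
    var
      NewValue: Double;
    begin
      UpdTransaction.StartTransaction;
      try
        UpdQuery.First;
        while not UpdQuery.Eof do
        begin
          // Do all the heavy lookups first (in their own transaction).
          NewValue := CalculateNewValue(UpdQuery);
          // Enter edit-mode only at the last possible moment.
          UpdQuery.Edit;
          UpdQuery.FieldByName('AMOUNT').AsFloat := NewValue;
          UpdQuery.Post;
          UpdQuery.Next;
        end;
        UpdTransaction.Commit;
      except
        UpdTransaction.Rollback;
        raise;
      end;
    end;
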
Your disc activity does not come from saving the record (that is a very
cheap operation) but from the additional continuous fetching of rows in
the other queries. Remember a few things:

- In the additional queries, only fetch the rows you really need, and
  only put the columns you really need into the select statement.
- Use proper where-clauses to reduce the result set instead of
  selecting rows plain and looping through the result set manually.
- If you need calculations for master-detail datasets, put them into
  SPs to let the server do the fetching of rows internally (an extreme
  performance win) and only pull the result set into the client.
- Never, ever use Locate or a filter to find a row; do it via a
  prepared query with a where-clause and parameters, as in the sketch
  below.
- Check that you have optimized the SQL you are using: proper use of
  joins where possible, sub-optimizations with inline selects, not
  issuing the same SQL several times, proper where-clauses, proper use
  of ORDER BY to get the relevant row as the guaranteed top-most one,
  and so on.
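
For the lookups, the pattern looks roughly like this (LookupCursor is a
TIB_Cursor, and the table and column names are invented):

    // Prepare once, before the batch loop.
    LookupCursor.SQL.Text :=
      'SELECT PRICE FROM ARTICLES WHERE ARTICLE_ID = :ID';
    LookupCursor.Prepare;

    // Inside the loop, only the parameter value changes.
    LookupCursor.Close;
    LookupCursor.ParamByName('ID').AsInteger := CurrentArticleID;
    LookupCursor.Open;
    Price := LookupCursor.FieldByName('PRICE').AsFloat;
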
I have a database with 50.000 test-customers and 250.000 test-orders
with about 2.000.000 test-order-items. If I change one order-item, a
trigger calls an SP, which updates the sum of the order-item by
calculating single-price * item-count * VAT-discount (which in turn is
calculated dynamically from the volume-conditions entered for the
article) minus another fixed discount, then the total sum of the order,
and also the total revenue generated with the affected customer by
summing the totals of all of that customer's orders. The whole
operation takes roughly half a second, and manually changing the
item-count of roughly 100.000 order-items with an update statement with
a where-clause took me roughly half a minute. This is done with NO code
on the client.
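Such a set-based update from the client is only a few lines (again a
sketch, with invented table and column names):

    // One UPDATE statement; the server touches all rows itself,
    // without per-record round-trips from the client.
    IB_DSQL1.SQL.Text :=
      'UPDATE ORDER_ITEMS SET ITEM_COUNT = ITEM_COUNT + :DELTA ' +
      'WHERE ORDER_ID BETWEEN :FIRST_ID AND :LAST_ID';
    IB_DSQL1.ParamByName('DELTA').AsInteger := 1;
    IB_DSQL1.ParamByName('FIRST_ID').AsInteger := 1;
    IB_DSQL1.ParamByName('LAST_ID').AsInteger := 1000;
    IB_DSQL1.Execute;
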
If you could describe the case you're working on, the members of this
list could perhaps come up with a pretty good solution that takes much
less time to complete....
CU, Nick