Subject: Re: [ib-support] FB slow
>>>>> On Tue, 16 Jul 2002 00:20:27 +1000, Helen Borrie <helebor@...> said:

HB> At 03:09 PM 15-07-02 +0200, Lele Gaifax wrote:
>> Sigh! So having some SPs that compute the various fields of a
>> table (it's a statistic, so each field comes from a summary of
>> other tables) is asking for long response times?

HB> Lele, your scenario is not similar to the one on which Ivan
HB> was commenting. In that case, the poster had complained of
HB> slowness when he first updated 30,000 rows and then "updated"
HB> the same 30,000 rows again in the same transaction, without
HB> committing the first round of changes. It is hard to convince
HB> this gentleman that it is an unsound test he is performing...
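If I understand that scenario correctly, it amounts to something like
the following (table and column names are invented for illustration):

```sql
-- Two full-table updates of the same 30,000 rows in ONE transaction:
UPDATE big_table SET a_field = a_field + 1;  -- creates a record version per row
UPDATE big_table SET a_field = a_field + 1;  -- now each row already carries this
                                             -- transaction's own uncommitted
                                             -- version, which must first be
                                             -- converted to a back-version delta
COMMIT;  -- committing between the two rounds avoids that extra work
```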

>> As usual, such tables have the general form (TheDay DATE,
>> TheKey1...TheKeyN INTEGER, Value1..ValueM DOUBLE PRECISION),
>> and there are, depending on `N', from 500 to 10k records for a
>> single day. My set of SPs, every 10 minutes or so, has to
>> collect data from other tables and precompute various summaries,
>> updating and inserting new records.

HB> Yes, but each time it does this task, the work is committed,
HB> right?

In pseudo-code, my procs are:

def main()
    start transaction
    reset_incremental_values()
    update_A()
    update_B()
    end transaction

def reset_incremental_values()
    update A set inc_field1 = 0, inc_field2 = 0 where TheDay='TODAY'
    update B set inc_field1 = 0, inc_field2 = 0 where TheDay='TODAY'

def update_A()
    foreach key in modified_keys(A)
        value = compute_summary_field1(key)
        update A set field1 = value where rdb$db_key = key

        value = compute_summary_field2(key)
        update A set field2 = value where rdb$db_key = key
    end for

def update_B()
    foreach key in modified_keys(B)
        value = compute_summary_field1_maybe_using_table_A(key)
        update B set field1 = value where rdb$db_key = key

        value = compute_summary_field2_maybe_using_table_A(key)
        update B set field2 = value where rdb$db_key = key
    end for

and so on. So, from a certain point of view, my case *is* similar to
the original poster's. Within a single transaction I am updating the
same row of each table multiple times, although generally a different
field each time. I usually use rdb$db_key for this.

Of course, when possible I *do* UPDATE multiple fields at
once. Reworking the SPs that don't would require some effort, but I'd
"gladly" give it a try if there were a concrete chance of going
faster!
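Concretely, merging the two updates in update_A into one statement
would mean something like this (names as in the pseudo-code above;
rdb$db_key is Firebird's row-identifier pseudo-column):

```sql
/* One UPDATE per row instead of two, so each row gets a single
   uncommitted record version per pass: */
UPDATE A
   SET field1 = :summary1,  /* result of compute_summary_field1(key) */
       field2 = :summary2   /* result of compute_summary_field2(key) */
 WHERE rdb$db_key = :key;
```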

HB> Of course, if you have multiple users all hitting the same
HB> target table simultaneously with updates to the same batches
HB> of rows, it will be a >>concurrency bottleneck<< that you will
HB> want to work around...but you won't have the CPU-eating
HB> effects the previous poster described from having his
HB> transaction attempt to convert its own uncommitted record
HB> versions to deltas.

No, this is not the case. Only one instance of the "main" SP above at
a time may be updating those tables.

thanx a lot,
bye, lele.
nickname: Lele Gaifax | When I live off what I thought yesterday,
real: Emanuele Gaifas | I will begin to fear those who copy me.
email: lele@... | -- Fortunato Depero, 1929.