Subject | Re: [ib-support] FB slow
---|---
Author | Helen Borrie
Post date | 2002-07-15T14:20:27Z
At 03:09 PM 15-07-02 +0200, Lele Gaifax wrote:
>Sigh! So having some SPs that compute the various fields of a table
>(it's a statistic, so each field comes from a summary of other tables)
>is calling for long response times?

Lele, your scenario is not similar to the one on which Ivan was
commenting. In that case, the poster had complained of slowness when he
first updated 30,000 rows and then "updated" the same 30,000 rows again in
the same transaction, without committing the first round of changes. It is
hard to convince this gentleman that it is an unsound test he is performing...
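
The difference, in a rough sketch (table and column names are invented here
purely for illustration), is between this:

   /* unsound test: the second pass forces the transaction to rework
      its own uncommitted record versions */
   UPDATE stats SET value1 = value1 * 2;  /* ~30,000 rows */
   UPDATE stats SET value1 = value1 * 2;  /* same rows, still uncommitted */
   COMMIT;

and this, where the second round starts from committed record versions:

   UPDATE stats SET value1 = value1 * 2;
   COMMIT;
   UPDATE stats SET value1 = value1 * 2;
   COMMIT;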
>As usual, such tables have the general form (TheDay DATE,
>TheKey1...TheKeyN INTEGER, Value1..ValueM DOUBLE PRECISION), and there
>are, depending on `N', from 500 to 10k records for a single day. My
>set of SPs, every 10 minutes or so, has to collect data from other
>tables and precompute various summaries, updating and inserting new
>records.

Yes, but each time it does this task, the work is committed, right?
>Since I'm trying very hard to squeeze out the time it takes to update
>the tables, would it be a better move to use a temp table to do the
>calculations and then copy its content into the real table?

No. No matter what you do, there is a point where the real table has to
get updated. You would add the overhead of shunting rows from the temp
table to the real table, whereas there is no penalty to performing the
updates directly, in a short, atomic transaction.
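
To make that concrete (all names invented for illustration, and note that
"temp_summary" would just be an ordinary table used as scratch space), the
temp-table route adds a shunt:

   INSERT INTO temp_summary (theday, thekey1, value1)
     SELECT theday, thekey1, SUM(amount) FROM detail GROUP BY theday, thekey1;
   UPDATE summary s SET value1 =
     (SELECT t.value1 FROM temp_summary t
       WHERE t.theday = s.theday AND t.thekey1 = s.thekey1);
   COMMIT;

whereas the direct route makes the same final write in one short transaction:

   UPDATE summary s SET value1 =
     (SELECT SUM(d.amount) FROM detail d
       WHERE d.theday = s.theday AND d.thekey1 = s.thekey1);
   COMMIT;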
In any case, with the multi-versioning architecture, you already
(effectively) have "temporary tables" being created for the new versions of
the affected rows, i.e. the rowset of new record versions generated from
the update.
Of course, if you have multiple users all hitting the same target table
simultaneously with updates to the same batches of rows, it will be
a >>concurrency bottleneck<< that you will want to work around...but you
won't have the CPU-eating effects the previous poster described from having
his transaction attempt to convert its own uncommitted record versions to
deltas.
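
If you do hit that, one possible workaround (a sketch only; "summary" is an
invented table name) is to make the competing batch writers queue behind one
another instead of colliding, e.g. by reserving the table in a WAIT
transaction for the duration of the short update:

   SET TRANSACTION WAIT ISOLATION LEVEL SNAPSHOT
     RESERVING summary FOR PROTECTED WRITE;
   UPDATE summary SET value1 = 0
     WHERE theday = CURRENT_DATE;  /* the batch update goes here */
   COMMIT;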
heLen
All for Open and Open for All
Firebird Open SQL Database · http://firebirdsql.org ·
http://users.tpg.com.au/helebor/
_______________________________________________________