| Subject | Re: Should updates be this slow? |
|---|---|
| Author | inoffensive_2006 |
| Post date | 2007-09-05T05:41:41Z |
--- In firebird-support@yahoogroups.com, Pavel Cisar <pcisar@...> wrote:
Currently down to just under 5 seconds for the update.

> inoffensive_2006 wrote:
> >
> > It takes a little more than a second to read all 8000 of
> > the records, and 24 seconds to update them.
> > I'm just curious about the speed of the updates. And I'm
> > wondering if I'm doing something that makes them that slow.
>
> Depends. Updates (and deletes) are always a little bit slower than
> inserts and inserts are (typically much more) slower than selects, so
> yes, there definitely will be performance difference between both
> operations, but how much it's slower depends on many factors.

Thanks Pavel:
I'm trying to understand if my code is performing as expected.
5 times slower seems like more than "little bit slower".
After defining the BIGINT column I want to update as NOT NULL
and giving it an initial fill of zero, the performance improved
to less than 5 seconds to update the sort order, 1690 rows per
second. It's now 5 times slower than a SELECT of a BIGINT key
and a VARCHAR.
> 1) Do you use parametrized update statement, or do you compile it each time?
I'm not sure what a "parametrized update statement" is, but I
believe the SQL is being compiled each time.
I'm using dynamic SQL and each row involves defining a new
IBPP statement with IBPP's StatementFactory().
This is an example of the SQL that's run for each row.
UPDATE CON
SET
CON_ALPHA_SORT_ORDER = '1130038230440428'
WHERE
CON_KEY = '2301';
Each sort order will have a unique value.
Does this SQL lend itself to parameterization?
I noticed I was calling IBPP's Prepare() for this SQL,
but since I'm not re-using the statement I've removed the
call to Prepare(), which improved performance by 9%. I
only noticed that because of what you're asking. Thanks.
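Yes, that statement parameterizes naturally: the two literals are the only
parts that change per row. As a minimal sketch of what that might look like
with IBPP (assuming `db` and `tr` are an attached IBPP::Database and a started
IBPP::Transaction, and `keyAndOrder` is a hypothetical name for the sorted
data; this needs a live Firebird database, so treat it as illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

#include <ibpp.h>  // IBPP's public header

// Sketch: prepare the UPDATE once with '?' placeholders, then re-execute
// it per row with fresh bindings, so the SQL is compiled only once.
// db, tr, and keyAndOrder are assumptions, not names from the original post.
void updateSortOrders(IBPP::Database db, IBPP::Transaction tr,
                      const std::vector<std::pair<int64_t, int64_t> >& keyAndOrder)
{
    IBPP::Statement st = IBPP::StatementFactory(db, tr);
    st->Prepare("UPDATE CON SET CON_ALPHA_SORT_ORDER = ? WHERE CON_KEY = ?");

    for (std::size_t i = 0; i < keyAndOrder.size(); ++i) {
        // IBPP parameter indexes are 1-based.
        st->Set(1, keyAndOrder[i].second);  // new CON_ALPHA_SORT_ORDER value
        st->Set(2, keyAndOrder[i].first);   // CON_KEY of the row to update
        st->Execute();
    }
}
```

Done this way, the single Prepare() pays for itself because the compiled
statement is reused for all 8000 rows; the 9% cost you saw came from
preparing a statement that was then executed only once.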
> 2) Do you commit each update or (whole) batch?

The transaction is started. A SELECT reads each row's key
and the user's full name; these are placed in an STL vector
and sorted, and the gap between each row's sort order is
calculated. All of the indexes for the table are altered to
inactive, the sort-order update is run on each row, all of
the indexes are altered back to active, and the transaction
is committed.
> 3) Do you have index defined on updated field(s)?

They are inactive when running the updates.
I appreciate the time you are taking to help me.
Thanks
Larry