Subject: Re: [firebird-support] Re: About update clause's time.
Author: Adam
--- In firebird-support@yahoogroups.com, Svein Erling Tysvær
<svein.erling.tysvaer@k...> wrote:
> Hmm, using 'where Bfield > Bvalue' means that the primary key cannot
> be used in the plan. Possibly a descending index could be used (I
> don't know how much Arno has improved this bit since 1.0), but the
> primary key creates an ascending index, so without the descending
> index the query has to go natural...

Are you sure, Set? An ascending index definitely does not help a
SELECT MAX() query, so FB does not use one in that case, but I think
it does use one for < and > operations (unfortunately I cannot test
it at this moment). Unless this is new in FB 1.5 and did not exist
in 1.x?
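
One quick way to settle it is to ask isql for the plan. Something
like this (the table and field names are only placeholders, following
the Bfield example above):

  SET PLAN ON;
  SELECT COUNT(*) FROM MYTABLE WHERE BFIELD > 1000;

isql then prints either PLAN (MYTABLE NATURAL) or PLAN (MYTABLE
INDEX (...)), which tells you straight away whether the ascending
index gets used for the > comparison.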

>
> Still, your table is small (300000 is peanuts) and using two minutes
> on a query updating an integer field for 27000 records based on the
> primary key sounds excessively long. Is there something you haven't
> told us or is it just me not being used to updates (more often I use
> insert)?

Well, under MGA an update costs much the same as an insert (a new
record version is written, and the old one is kept as a back version
stored as a delta). Unfortunately I do not have access to my work
machine at the moment, where I have linked a good post from Helen on
how MGA works. But essentially both versions co-exist until the new
record commits and no other transaction is interested in the old one
any more; it then gets flagged for garbage collection, and depending
on the server model you are using, either the GC thread runs or the
poor guy who runs the next query that hits that record gets lumped
with the clean-up.
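
If you want to see the back versions piling up, gstat can show them;
for example (the database path and table name here are just made up):

  gstat -r -t MYTABLE /data/mydb.fdb

and then look at the "Average version length", "total versions" and
"max versions" figures it reports for the table.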

Generally speaking, I do not encounter such large ranged update
queries, and certainly not ones where the user would actually be
waiting in front of the screen for something to happen. This sort of
operation would only happen in our system because of a large amount
of polled data, and that is handled by a service. Sorry, I can't
comment on whether the performance you are expecting is reasonable
or not.

I would be interested in knowing four things.

Firstly, are there any triggers on this table, and if so, what do
they do? Do they recursively fire other triggers, etc.?
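
You can check for them straight from the system tables; something
along these lines, where MYTABLE stands in for your table name:

  SELECT RDB$TRIGGER_NAME, RDB$TRIGGER_TYPE, RDB$TRIGGER_INACTIVE
  FROM RDB$TRIGGERS
  WHERE RDB$RELATION_NAME = 'MYTABLE';

Any update trigger that is not marked inactive fires once for every
one of those 27000 rows.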

Secondly, I am not familiar with your system, but if you are doing a
batch delete, then a batch insert, then a batch update to test this,
there is possibly a lot of garbage lying around, so does running a
sweep first help?
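
A sweep is just a gfix call, for example (the path and credentials
are placeholders):

  gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb

If the update is noticeably faster afterwards, accumulated garbage
was at least part of your problem.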

Thirdly, if you have created your own index on the PK field, drop
that index. The PK constraint already adds one for you, and the
duplicate sometimes confuses the optimiser, which will then try to
use a very silly plan.
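
To spot a duplicate, list the indices on the table and look for two
covering the same field (MYTABLE and the index name below are
placeholders):

  SELECT i.RDB$INDEX_NAME, s.RDB$FIELD_NAME, i.RDB$UNIQUE_FLAG
  FROM RDB$INDICES i
  JOIN RDB$INDEX_SEGMENTS s ON s.RDB$INDEX_NAME = i.RDB$INDEX_NAME
  WHERE i.RDB$RELATION_NAME = 'MYTABLE';

  DROP INDEX MY_REDUNDANT_INDEX; /* the hand-made one, not the
                                    RDB$PRIMARYnn one */

Only the index you created yourself can be dropped this way; the
constraint-backed one has to stay.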

Fourthly, are there any other indexed fields, or fields defined as
unique, etc.? The update will have to insert the new record versions
into each of those indexes, as well as possibly garbage collect the
old versions out of them.
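
If some of those are plain indices (not backing a PK/unique/foreign
key constraint), one experiment is to deactivate them around the
batch; a sketch, with made-up table, field and index names:

  ALTER INDEX IDX_MYTABLE_BFIELD INACTIVE;
  UPDATE MYTABLE SET BFIELD = BFIELD + 1 WHERE AFIELD < 27000;
  COMMIT;
  ALTER INDEX IDX_MYTABLE_BFIELD ACTIVE; /* rebuilds in one pass */

That replaces 27000 incremental index updates with a single rebuild,
though it obviously is not an option for the constraint indices.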

Hopefully it is something simple.

Adam