Subject: Re: Should updates be this slow?
Author: inoffensive_2006
--- In firebird-support@yahoogroups.com, "Alan McDonald" <alan@...> wrote:
>
> > Hi Folks:
> >
> > I'm running Superserver 2.0.1 in Windows XP Media on a
> > low end AMD X2, a 3800+, and a 100GB IDE drive. My app is
> > running on the system.
> >
> > This isn't the system I'd choose for the best performance,
> > but it's probably representative of what my customers will
> > have.
> >
> > The contact table, describing persons and organizations,
> > is a fat thing, but a SELECT of the database key and a
> > varchar large enough to hold a person's first and last name
> > will plow through at a little over 7000 records a second, with
> > overhead for my C++ code to allocate a class instance and put
> > its address in a STL vector.
> >
> > The vector is sorted and an index is calculated for each
> > record. The UPDATE sets a new value for a BIGINT index
> > in the contact table.
> >
> > This operation is grinding along at about 330 records per
> > second.
> >
> > I've altered the index for these records, to disable
> > the index during the updates and enable it afterward.
> >
> > Does this sound like the performance I should expect, or
> > am I doing something wrong?
> >
> > I'm ignorant of the ways of SQL and Firebird, so I'm
> > interested in suggestions.
> >
> > Thanks
> > Larry
>
> What, precisely, are you trying to do here?
> You're retrieving all records in a table, calculating a field on the
> client side and updating that field value as you move through the
> records (or after retrieving all records)?
> You are somehow using this calculated value as an index of some kind
> relevant to your application and/or business model? (STL vector? what's
> that?)
> Sorry, I'm lost as to what this is all trying to achieve, but it sounds
> clumsy, whatever it is.
> Alan
>
Thanks Alan:

It may be clumsy.

Small picture - should an UPDATE of a BIGINT column in a
table take 20 times longer than a SELECT of a BIGINT key and
a VARCHAR from the same table?
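
To make the comparison concrete, the two operations are shaped
roughly like this (table and column names are only illustrative,
not my real schema):

  -- the fast pass: a little over 7000 records a second
  SELECT CONTACT_ID, FULL_NAME FROM CONTACT;

  -- the slow pass, one statement per record: about 330 a second
  UPDATE CONTACT SET SORT_ORDER = ? WHERE CONTACT_ID = ?;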

Here's the big picture.

I'm developing a new piece of code that keeps, in a Windows
memory-mapped file, an array of small records, each holding a
contact record's key and its sort order.

I've decided to start working on the code that will read
these fields, and I want to generate the BIGINT sort-order
index for each record to prepare the table for reading.

The production code will generate these indexes as the
records are inserted into the database, but the table is
currently being built by a different piece of code that
doesn't generate them.
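
For reference, the index juggling I mentioned above amounts to
this (the index name is illustrative):

  -- disable the index on the sort-order column before the pass
  ALTER INDEX IDX_SORT_ORDER INACTIVE;

  -- ... run the UPDATE for each of the 8000 records ...

  -- rebuild the index once the updates are done
  ALTER INDEX IDX_SORT_ORDER ACTIVE;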

It takes a little more than a second to read all 8000 of
the records, and 24 seconds to update them - which works out
to the roughly 330 updates per second I mentioned.

I'm just curious about the speed of the updates. And I'm
wondering if I'm doing something that makes them that slow.

I am interested in any suggestions.

I appreciate your swift response to my question. It's
almost 4:00 in the morning here, so I'll take a look at
this thread in a few hours.

Thanks
Larry