Subject | RE: [firebird-support] Re: How to improve update performance with millions records? |
---|---|
Author | Svein Erling Tysvær |
Post date | 2012-06-12T06:28:04Z |
> > Hi, I'm jimmy

Sounds OK, but what's more important is the gap mentioned above: committing the update every 200 rows doesn't help if one or more other concurrent transactions run for a long time without committing (transactions that are both read-only and read committed are fine, but other combinations are not).
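The gap Set describes can be inspected from the monitoring tables (available in Firebird 2.1 and later), and a long-running reader can be started with the attribute combination he says is harmless. A minimal sketch:

```sql
-- Gap between the oldest active transaction and the next transaction
-- number; a large, growing gap means old record versions cannot be
-- garbage collected, and updates slow down.
SELECT MON$OLDEST_ACTIVE,
       MON$NEXT_TRANSACTION,
       MON$NEXT_TRANSACTION - MON$OLDEST_ACTIVE AS GAP
FROM MON$DATABASE;

-- A long-running reader started like this does not hold the gap open:
SET TRANSACTION READ ONLY READ COMMITTED;
```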
>
> Hi Jimmy!
>
> > I have a problem, please help me!
> > I have a table with 3,000,000 records; every row is updated 1 to 5
> > times per day. Update statements have become slow, about 30
> > records/second, while inserts run above 1,000 records/second.
> > Is my usage bad? What should I do?
>
> I don't know whether your usage is bad or not. What kind of indexes do
> you have, and how do you update? What about transactions: do you have a
> noticeable gap between the oldest (active) transaction and the next
> transaction? I don't know whether it is still relevant (it is a very
> old article), but in some cases I think rdb$db_key can be useful for
> updates: http://ibexpert.net/ibe/index.php?n=Doc.TheMysteryOfRDBDBKEY
>
> HTH,
> Set
>
> Thanks for your help!
> My table has an integer primary key; I commit the transaction after
> every 200 updated rows.
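The rdb$db_key technique from the linked article can be sketched roughly like this (the table and column names here are made up for illustration; note that a db_key is only guaranteed stable for the duration of the transaction that fetched it):

```sql
-- Fetch the physical row address along with the data...
SELECT RDB$DB_KEY, AMOUNT FROM SALES WHERE PROCESSED = 0;

-- ...then, within the same transaction, update each fetched row by
-- that address instead of walking the primary-key index again:
UPDATE SALES SET PROCESSED = 1 WHERE RDB$DB_KEY = ?;
```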
And take up Thomas' offer; he will notice if there's something wrong with the output of gstat.
Set