Subject | Re: [firebird-support] Re: Short or long transactions
---|---
Author | Timothy Madden |
Post date | 2008-09-04T13:34:41Z |
On Thu, Sep 4, 2008 at 4:16 PM, burmair <YahooSpam@...> wrote:
> --- In firebird-support@yahoogroups.com, "Martijn Tonies"
> <m.tonies@...> wrote:
>>
>> > ...In terms of performance, is it faster to do as much as possible in a
>> > single transaction (or maybe a few "chunks"), or is it essentially the
>> > same if each update is performed in its own transaction?...
>>
>> Use a transaction per unit of work. Period.
>
> What constitutes a unit of work? You could think of my application as
> a REALLY big spreadsheet, with Firebird as the persistence layer.
> Make a change in a single cell, and the effects ripple through the
> entire spreadsheet. Is the unit of work a single cell update?
> Perhaps first generation effects are a unit of work, then second
> generation, and so on. There are other plausible groupings in our
> multi-dimensional models, but it's not always easy to find the
> boundaries. Maybe the entire update is a unit of work (this seems to
> me the most sensible interpretation), but then the question becomes,
> how many millions of DB updates can reasonably be performed in a
> single transaction?
>
> I guess my question really is, how much overhead is there in the setup
> and teardown of a transaction, and will that overhead be affected by
> any of the DB parameters that I can manipulate?
It looks obvious to me that you need to do all the updates in a transaction.
Should your application be interrupted (by a bug, a crash of some kind, a
power failure, whatever ...), you would not want half of the cells updated
and half still old. What kind of application are you building that requires
millions of interdependent updates when a single cell changes? Maybe that is
not a good design to begin with.
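For illustration, here is a minimal sketch of that all-or-nothing grouping,
assuming the Python fdb driver; the cells table, the DSN and credentials, and
the compute_ripple_effects() helper are made-up stand-ins for your spreadsheet
model, not anything from this thread:

```python
# Minimal sketch (Python, fdb driver): all ripple updates in ONE transaction.
import fdb

def compute_ripple_effects(changed_cell):
    # Hypothetical stand-in: yields (cell_id, new_value) for every cell
    # affected by the change; the real logic lives in the application.
    yield from [(2, 42.0), (3, 84.0)]

con = fdb.connect(dsn="localhost:/data/sheet.fdb",
                  user="sysdba", password="masterkey")
cur = con.cursor()
try:
    for cell_id, value in compute_ripple_effects(changed_cell=1):
        cur.execute("UPDATE cells SET value = ? WHERE id = ?",
                    (value, cell_id))
    con.commit()    # the single durable point: one forced disk write
except Exception:
    con.rollback()  # a bug or failure mid-way leaves the old state intact
    raise
```

Any exception raised before the commit leaves the database exactly as it was.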
I do not know the exact transaction overhead, or how many updates can be
pushed into one, but you should not worry too much about it until you can
actually observe performance penalties. If, on the other hand, you use too
many transactions instead of grouping more statements into a single one,
you might see penalties because of the forced writes to disk. Transactions
need to be durable, which means every commit costs a disk access. Make sure
you commit only as often as necessary.
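As a rough sketch of what "commit only as often as necessary" can look like,
again assuming the Python fdb driver and the same hypothetical cells table,
you can batch many updates per commit; BATCH is a knob you would tune by
measurement:

```python
# Minimal sketch (Python, fdb driver): commit once per BATCH updates
# instead of once per update, cutting the number of forced disk writes.
import fdb

BATCH = 10_000  # tuning knob to be measured, not a Firebird-mandated value

con = fdb.connect(dsn="localhost:/data/sheet.fdb",
                  user="sysdba", password="masterkey")
cur = con.cursor()

updates = (((i % 97) * 1.5, i) for i in range(1_000_000))  # hypothetical feed
pending = 0
for value, cell_id in updates:
    cur.execute("UPDATE cells SET value = ? WHERE id = ?", (value, cell_id))
    pending += 1
    if pending >= BATCH:
        con.commit()  # one disk sync per BATCH updates
        pending = 0
if pending:
    con.commit()      # flush the final, partial batch
```

Note the trade-off: batching like this gives up the all-or-nothing behaviour
of the single big transaction above, because a failure leaves the earlier
batches committed. Measure first, then pick the commit frequency.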
Hope that helps,
Timothy Madden