Subject | Re: Short or long transactions
---|---
Author | burmair
Post date | 2008-09-04T13:16:38Z
--- In firebird-support@yahoogroups.com, "Martijn Tonies"
<m.tonies@...> wrote:
> > ...In terms of performance, is it faster to do as much as possible in a
> > single transaction (or maybe a few "chunks"), or is it essentially the
> > same if each update is performed in its own transaction?...
>
> Use a transaction per unit of work. Period.

What constitutes a unit of work? You could think of my application as
a REALLY big spreadsheet, with Firebird as the persistence layer.
Make a change in a single cell, and the effects ripple through the
entire spreadsheet. Is the unit of work a single cell update?
Perhaps first generation effects are a unit of work, then second
generation, and so on. There are other plausible groupings in our
multi-dimensional models, but it's not always easy to find the
boundaries. Maybe the entire update is a unit of work (this seems to
me the most sensible interpretation), but then the question becomes,
how many millions of DB updates can reasonably be performed in a
single transaction?
I guess my question really is, how much overhead is there in the setup
and teardown of a transaction, and will that overhead be affected by
any of the DB parameters that I can manipulate?
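For what it's worth, the usual middle ground between one-commit-per-update and one giant transaction is to commit in fixed-size chunks. A minimal sketch of that pattern, using sqlite3 purely as a runnable stand-in (the table name, column names, and chunk size are my own illustrations; with Firebird you would do the same thing through a DB-API driver, and the right chunk size is something to measure, not assume):

```python
import sqlite3

def apply_updates(conn, updates, chunk_size=10_000):
    """Apply (cell_id, value) updates, committing once per chunk
    rather than once per row or once for the entire run."""
    cur = conn.cursor()
    for i in range(0, len(updates), chunk_size):
        cur.executemany(
            "UPDATE cells SET value = ? WHERE id = ?",
            [(value, cell_id) for cell_id, value in updates[i:i + chunk_size]],
        )
        conn.commit()  # transaction teardown/setup happens once per chunk

# Demo on an in-memory database with 100 "cells".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cells (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO cells VALUES (?, ?)", [(i, 0.0) for i in range(100)])
conn.commit()

apply_updates(conn, [(i, float(i)) for i in range(100)], chunk_size=25)
print(conn.execute("SELECT value FROM cells WHERE id = 42").fetchone()[0])  # → 42.0
```

The per-chunk commit keeps any single transaction from holding millions of record versions, while amortizing the per-transaction overhead over many updates; the trade-off is that a failure mid-run leaves earlier chunks committed, so the chunking only makes sense if a chunk is itself an acceptable unit of work.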