Subject | Re: Short or long transactions
---|---
Author | burmair
Post date | 2008-09-04T15:36:15Z
--- In firebird-support@yahoogroups.com, "Martijn Tonies"
<m.tonies@...> wrote:
>
> > > > ...In terms of performance, is it faster to do as much as
> > > > possible in a single transaction (or maybe a few "chunks"), or is it
> > > > essentially the same if each update is performed in its own
> > > > transaction?...
> > >
> > > Use a transaction per unit of work. Period.
> >
> > What constitutes a unit of work? You could think of my application as
> > a REALLY big spreadsheet, with Firebird as the persistence layer.
> > Make a change in a single cell, and the effects ripple through the
> > entire spreadsheet. Is the unit of work a single cell update?
> > Perhaps first generation effects are a unit of work, then second
> > generation, and so on. There are other plausible groupings in our
> > multi-dimensional models, but it's not always easy to find the
> > boundaries. Maybe the entire update is a unit of work (this seems to
> > me the most sensible interpretation), but then the question becomes,
> > how many millions of DB updates can reasonably be performed in a
> > single transaction?
> >
> > I guess my question really is, how much overhead is there in the setup
> > and teardown of a transaction, and will that overhead be affected by
> > any of the DB parameters that I can manipulate?
>
> A unit of work is what you want to have saved in the database so
> that no information is lost between the different saves. You should
> be able to go from one consistent state of saved data to another
> by performing calculations/adding new data/whatever.

I agree. My question is, what do you do when the size of that unit of
work exceeds the capacity of the DB to handle it as a single
transaction? Even if atomicity can't be achieved, there are numerous
other advantages to using a relational DB for persistence, so it seems
like the answer would be to use multiple transactions as effectively
as possible. Hence my question: a smaller number of larger
transactions, or a larger number of smaller transactions?
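
To make the "chunks" idea concrete, something like the following is what I have in mind, sketched in Python with the fdb driver (standard DB-API); the table name, column names, DSN and chunk size are only placeholders for illustration, not our real schema:

```python
# Sketch: apply a large batch of cell updates in chunks, committing every
# CHUNK_SIZE statements instead of once per update or once for everything.
# Assumes the fdb driver (DB-API, qmark parameters); names are placeholders.
import fdb

CHUNK_SIZE = 10_000  # placeholder; would be tuned against measured overhead

def apply_updates(updates):
    """updates: iterable of (new_value, cell_id) tuples."""
    con = fdb.connect(dsn='localhost:/data/model.fdb',
                      user='SYSDBA', password='masterkey')
    try:
        cur = con.cursor()
        pending = 0
        for new_value, cell_id in updates:
            cur.execute("UPDATE CELLS SET VAL = ? WHERE ID = ?",
                        (new_value, cell_id))
            pending += 1
            if pending >= CHUNK_SIZE:
                con.commit()   # close this chunk's transaction; the next
                pending = 0    # execute starts a new one implicitly
        if pending:
            con.commit()       # commit the final partial chunk
    except Exception:
        con.rollback()         # a failure only loses the current chunk
        raise
    finally:
        con.close()
```

Committing per update is just CHUNK_SIZE = 1, and one giant transaction is CHUNK_SIZE = "everything"; my question is where between those extremes the per-transaction overhead stops mattering.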