| Subject | Re: [Firebird-Architect] Group Commits |
|---|---|
| Author | Jim Starkey |
| Post date | 2004-11-08T19:01:04Z |
Dmitry Yemanov wrote:
>>But it's an interesting philosophical question as to whether you offer
>>individual transactions the option of faster service at the expense of
>>other transactions, and an option, if always used, would result in lower
>>performance for everyone.
>
>isc_dpb_no_garbage_collect, if always used, also tends to result in lower
>overall performance for everyone, but this is required for some usage
>patterns, including gbak. I think that this option would allow occasional
>critical transactions to be served more effectively under heavy load (a
>kind of OOB packets?)

Yeah, you're probably right. From time to time pragmatism trumps theory
("the difference between theory and practice is that in theory there is
no difference"). So I'll argue against the idea and concede at the
end. But I will make you put up a good argument before I cave.

>As a slightly offtopic question - did you think about the deferred commits
>(in the meaning I presented in my first message)? It might be a good
>solution for those preferring to get extra speed at the expense of
>decreased (but still controlled) reliability?

If I remember correctly, you were talking about running the commit
processing through the point where the database had a high degree of
certainty that the transaction would commit, returning success to the
user, then, using something like group commit, batching a TIP update.
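The group-commit part of this is worth making concrete. A toy sketch of the general technique (my illustration, not Firebird code; the class and names are hypothetical): several committing transactions queue their commit bits, and one forced write makes the whole batch durable at once, while each committer still blocks until its own update is down.

```python
import threading
import time

class GroupCommitter:
    """Toy group commit: commits pile up; one simulated forced write
    makes a whole batch durable, instead of one write per transaction."""

    def __init__(self):
        self.cond = threading.Condition()
        self.pending = []      # txn ids whose commit bit awaits the write
        self.durable = set()   # txn ids safely on disk

    def commit(self, txn_id):
        # Returns only once the commit bit is durable -- the "safe on
        # oxide" rule: success is never reported before the write.
        with self.cond:
            self.pending.append(txn_id)
            self.cond.notify_all()            # wake the flusher
            while txn_id not in self.durable:
                self.cond.wait()

    def flusher(self, writes):
        # One forced write per batch, however many commits piled up.
        while True:
            with self.cond:
                while not self.pending:
                    self.cond.wait()
                batch, self.pending = self.pending, []
            time.sleep(0.005)                 # stands in for the forced write
            with self.cond:
                self.durable.update(batch)
                writes.append(len(batch))     # record the batch size
                self.cond.notify_all()        # release the waiting committers
```

The point of the batching is visible in `writes`: with many concurrent committers, the number of forced writes is typically well below the number of transactions, yet no caller sees "committed" before its bit is on disk.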
Assuming this is what you meant (and I'm extrapolating), I don't like it
at all for two reasons. First, I don't consider anything committed that
isn't "safe on oxide". No matter how likely it may be that a
transaction will commit, there's always the possibility that somebody
will trip over the power cord, or a disgruntled ex-employee will take out
his aggressions on a helpless server box, or some other unavoidable
natural disaster will strike. While I'd like to be the one walking away
from the ATM machine (that's automatic teller machine machine) with cash
in my pocket and no record of the withdrawal on the bank's disk, I find
my deeper sympathy lies with stable society. Second, and this is
unavoidable: if commit is deferred until after return to the host program,
there is a possibility that the program could start a transaction
overlapping with the one it considered committed, leading to utterly
bizarre results.
There are other ways around this problem. The most popular is a
sequential log of transaction updates, written with forced write, that
can be reprocessed and re-applied at database restart time. This has
much going for it, though simplicity is not one of them.
Netfrastructure is designed around this idea, though I've never gotten
around to actually implementing it, so I'm not going to claim I know
where the elephant traps are located. But the idea of performing
transaction writes to an intermediate sequential file that is
asynchronously applied to the database file by a writer thread is very
intriguing.
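A minimal sketch of how I read that idea (names like `SequentialLog` are mine for illustration, not Netfrastructure's actual design): commit means a forced append to a sequential log; a writer thread later applies the records to the database lazily; restart replays the log, so a crash between commit and the lazy apply loses nothing.

```python
import json
import os
import queue
import threading

class SequentialLog:
    """Toy write-ahead log: the forced append is the commit point;
    a writer thread applies records to the database asynchronously."""

    def __init__(self, path):
        self.path = path
        self.queue = queue.Queue()
        self.database = {}                      # stand-in for the db file

    def commit(self, txn_updates):
        # Forced write of the log record: once this returns, the
        # transaction survives a crash even if never applied.
        with open(self.path, "a") as log:
            log.write(json.dumps(txn_updates) + "\n")
            log.flush()
            os.fsync(log.fileno())
        self.queue.put(txn_updates)             # hand off to the writer thread

    def writer(self):
        # Asynchronously apply logged updates to the database.
        while True:
            updates = self.queue.get()
            if updates is None:                 # shutdown sentinel
                break
            self.database.update(updates)
            self.queue.task_done()

    def recover(self):
        # Restart: reprocess and re-apply the log from the beginning.
        if os.path.exists(self.path):
            with open(self.path) as log:
                for line in log:
                    self.database.update(json.loads(line))
```

The elephant traps Jim alludes to live mostly outside this sketch: log truncation once records are applied, idempotent replay, and keeping readers consistent while the database file lags the log.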