Subject | Suspended Transactions (was Multithreading) |
---|---|
Author | Adam Clarke |
Post date | 2004-10-08T09:24:14Z |
Roman Rokytskyy wrote:
> What I would like to see, is the possibility to use a transaction
> handle outside the connection context. E.g. I start a transaction in one
> connection, then suspend it, then resume it in another connection (in
> other words, support for XA transactions over multiple connections).

I agree. Such a feature would be very useful for web applications which
need to collect information over multiple requests before the data is
complete.
Currently the process goes like this (sketched below):
request 1: Collect some data and stash it somewhere.
request 2 - n: Perhaps restore data from previous requests, remembering to
stash it all again.
request n + 1: Restore all the previous data, concoct it into SQL
statements and attempt to commit it.
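Purely for illustration, here is a minimal sketch of that stash-and-replay pattern in Java/JDBC. The RequestStash class, the collected_data table and the single-string payload are all made up for the example; a real application would stash richer objects in its HTTP session:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in for whatever per-user store the web app actually uses (HTTP session, etc.).
class RequestStash {
    static final Map<String, List<String>> STASH = new HashMap<>();

    // Requests 1 .. n: just accumulate the collected values; no database work yet.
    static void stash(String sessionId, String value) {
        STASH.computeIfAbsent(sessionId, k -> new ArrayList<>()).add(value);
    }

    // Request n + 1: restore everything, turn it into SQL and commit in one go.
    static void commitAll(String sessionId, String jdbcUrl) throws Exception {
        List<String> values = STASH.remove(sessionId);
        if (values == null) {
            return;                               // nothing was ever stashed
        }
        try (Connection con = DriverManager.getConnection(jdbcUrl)) {
            con.setAutoCommit(false);
            try (PreparedStatement ps =
                     con.prepareStatement("INSERT INTO collected_data (val) VALUES (?)")) {
                for (String v : values) {
                    ps.setString(1, v);
                    ps.executeUpdate();
                }
            }
            con.commit();                         // the data becomes visible only here
        }
    }
}
```

Every request pays the cost of saving and restoring the stash, and nothing reaches the database until the very last step.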
I would rather be putting the data into the database across requests (it
needs to be there later anyhow) and just wait until the end of the
process to call the commit and/or rollback, depending on what is
required. An added bonus would be having access to the NEW.COLUMN_NAME
values across multiple database connections.
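If the engine (and the driver's XAResource) allowed a suspended branch to be resumed from a different connection, the web tier's side of it might look roughly like the sketch below. This shows the requested behaviour, not anything Firebird or Jaybird currently supports; the SimpleXid class exists only to make the example self-contained, and how the later request obtains its XAResource is left out.

```java
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Minimal Xid so the example compiles on its own; a real app server supplies its own.
class SimpleXid implements Xid {
    private final byte[] gtrid;
    private final byte[] bqual;
    SimpleXid(byte[] gtrid, byte[] bqual) { this.gtrid = gtrid; this.bqual = bqual; }
    public int getFormatId()               { return 1; }   // arbitrary format id
    public byte[] getGlobalTransactionId() { return gtrid; }
    public byte[] getBranchQualifier()     { return bqual; }
}

class SuspendedTransactionSketch {
    // Request 1: start the branch, do some work, then suspend instead of committing.
    static void firstRequest(XAResource xaRes, Xid xid) throws XAException {
        xaRes.start(xid, XAResource.TMNOFLAGS);
        // ... INSERT the data collected so far on this connection ...
        xaRes.end(xid, XAResource.TMSUSPEND);   // leave the transaction open on the server
    }

    // Request n + 1: pick the same branch up again, possibly on a *different*
    // connection, finish the work and commit the lot.
    static void lastRequest(XAResource xaRes, Xid xid) throws XAException {
        xaRes.start(xid, XAResource.TMRESUME);  // the part the engine would need to allow
        // ... INSERT the remaining data, read the NEW.COLUMN_NAME values, etc. ...
        xaRes.end(xid, XAResource.TMSUCCESS);
        if (xaRes.prepare(xid) == XAResource.XA_OK) {
            xaRes.commit(xid, false);           // or xaRes.rollback(xid) if something went wrong
        }
    }
}
```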
Some problems to consider...
1. How do you know when a "suspended" transaction is never going to be
"resumed"? If you don't handle this then there will be a growing list of
orphaned transactions gumming up the record versioning works and
presumably wasting resources. I guess a timeout would be required, with
an automatic rollback issued if it was hit (see the sketch after point 2).
2. Given that in my web app situation you would likely be connecting as
a single user, what mechanism prevents other db connections from
hijacking the transaction? Do you just trust everything? Do you have the
client create a secret "id" that will be needed to get the transaction
back? Or do you have the server provide a secret "id" and use that?
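To make both points concrete, here is a small, purely illustrative sketch of the bookkeeping a server (or middle tier) would need: each suspended transaction is filed under an unguessable, server-generated token, and a background sweeper rolls back anything that has not been resumed within the timeout. TransactionHandle is a placeholder for whatever the engine would actually expose.

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Placeholder for however the engine represents an open transaction.
interface TransactionHandle {
    void rollback();
}

class SuspendedTransactionRegistry {
    private static final long TIMEOUT_MILLIS = 10 * 60 * 1000;   // e.g. 10 minutes

    private static final class Entry {
        final TransactionHandle tx;
        final long lastTouched;
        Entry(TransactionHandle tx, long lastTouched) { this.tx = tx; this.lastTouched = lastTouched; }
    }

    private final ConcurrentHashMap<String, Entry> suspended = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();
    private final ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();

    SuspendedTransactionRegistry() {
        // Problem 1: periodically roll back anything that was never resumed.
        sweeper.scheduleAtFixedRate(this::rollBackExpired, 1, 1, TimeUnit.MINUTES);
    }

    // Problem 2: the caller gets back a secret, unguessable token; only a client
    // that can present the token may resume (and then commit or roll back) the transaction.
    String suspend(TransactionHandle tx) {
        byte[] raw = new byte[32];
        random.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        suspended.put(token, new Entry(tx, System.currentTimeMillis()));
        return token;
    }

    TransactionHandle resume(String token) {
        Entry e = suspended.remove(token);
        return e == null ? null : e.tx;   // null: expired, already rolled back, or bogus token
    }

    private void rollBackExpired() {
        long cutoff = System.currentTimeMillis() - TIMEOUT_MILLIS;
        suspended.forEach((token, e) -> {
            if (e.lastTouched < cutoff && suspended.remove(token, e)) {
                e.tx.rollback();
            }
        });
    }
}
```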
Probably heaps more, but I'm no engine hacker.
Cheers
--
Adam Clarke
Strategic Data Pty Ltd