Subject: Opinions on this...
Author: Jason Wharton
Post date: 2001-03-17T16:24:16Z
> > I can put everything inside a unique transaction, but it is not a good
> > thing to have all the invoice lost if, at the commit time, IB reports a
> > deadlock (maybe some other guy did another invoice using the same item,
> > so updating the stock = conflict).
> >
> > There is much more complexity than I'm exposing here (actually some
> > triggers also update other tables that deal with quarrels, etc...) but at
> > the moment I would like to hear your comments on how to make the above
> > process work in a way that minimizes deadlocks.
>
> If it's just that two or more users update the stock for the same item,
> you could use the WaitOnLock property of the transaction. If a deadlock
> occurs, the transaction will wait and then retry the post. If the deadlock
> was caused only by this problem, the deadlocks will be avoided easily.

An interactive user application should NEVER use lock wait. The application
should be designed such that it has fluid processing (uninterrupted
interactions) with the data. I believe this means that the order would have
to be entered in more than a single logical transaction, and that canceling
the order doesn't mean a rollback of everything but rather a modification to
the header indicating that the order never became effective.
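The "flag the header, don't roll back" idea can be sketched like this, with plain Python standing in for the database (the Order class and its fields are invented for illustration, not from any actual schema):

```python
# Sketch of "cancel by flagging the header" instead of rolling back.
# The Order class and its fields are hypothetical, for illustration only.

class Order:
    def __init__(self, order_id):
        self.order_id = order_id
        self.lines = []          # detail rows, committed as they are entered
        self.effective = True    # header flag; False means "never happened"

    def add_line(self, item, qty):
        # Each line can be committed in its own small transaction.
        self.lines.append((item, qty))

    def cancel(self):
        # No rollback of the committed rows -- just mark the header.
        self.effective = False

order = Order(1001)
order.add_line("widget", 3)
order.cancel()
# The detail rows survive; queries simply ignore ineffective orders.
assert order.lines and not order.effective
```

The committed detail rows remain in the database either way; only queries that filter on the header flag decide whether the order "exists".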
> On the other hand, you could store the stock information in a one-to-many
> relationship to the items table, so you have no update on the items
> table and a real nice overview of when which amount of which item got out
> and when it got in.
> But in this case you have to sum the entries to get the current stock
> AND it will be tough to prevent the stock being below zero....

An order should result in "insert only" so that deadlocks can be avoided as
well. Allocations from stock result in an insert to a special table for
reserved items. Then, when the order is shipped or becomes permanent, the
reserved records are "rolled up" to impact the stock directly. When another
user is attempting to reserve stock, it is their responsibility to check the
amount in the stock table as well as the number of items reserved in that
table. By calculating the balance, a trigger or stored procedure could
enforce a rule that stock minus reserved never drops below zero. This
would be multi-user safe too (if done in a stored proc or trigger only), as
I rather doubt that IB would let them happen at exactly the same time. Doing
it in two steps from the client would allow the problem, though. I think
ServerAutoCommit may also have to play a part in this, but I am not totally
sure.
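A minimal sketch of that reservation scheme, with Python containers standing in for the stock and reserved-items tables (all names invented; in InterBase the balance check would live in a trigger or stored procedure so it runs server-side and atomically):

```python
# Sketch of "reserve by insert, check the balance" as described above.
# Table names and the exception are hypothetical; in a real database this
# check belongs in a trigger or stored procedure, not client code.

stock = {"widget": 10}          # stock table: item -> on-hand quantity
reserved = []                   # reserved-items table: (item, qty) inserts

def reserve(item, qty):
    already = sum(q for i, q in reserved if i == item)
    if stock.get(item, 0) - already - qty < 0:
        raise ValueError("stock minus reserved would drop below zero")
    reserved.append((item, qty))  # insert only -- no row is updated

def roll_up(item):
    # When the order ships, collapse reservations into the stock table.
    global reserved
    taken = sum(q for i, q in reserved if i == item)
    stock[item] -= taken
    reserved = [(i, q) for i, q in reserved if i != item]

reserve("widget", 6)
reserve("widget", 4)            # exactly exhausts the stock
try:
    reserve("widget", 1)        # would go below zero -> rejected
except ValueError:
    pass
roll_up("widget")
assert stock["widget"] == 0 and not reserved
```

Because reservations are pure inserts, two users never fight over the same stock row until the roll-up, which happens once per order rather than once per allocation.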
This is a model I am working with quite a bit lately as I approach a
new application. There are items of correspondence that are just too complex
to hold within a single transaction. Plus, when working with web apps or
multi-tier apps, things need to be broken up into pieces where transactions
at the database level are concerned. Thus, I am exploring various ways to
process work items using a transaction for each little step; the
application then needs to know how to deal with work items being in a
partially completed state. It feels like two steps forward, one step back...
But a framework to accomplish this is what I am working on now.
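One way to picture such a framework, with each step standing for one short database transaction (the step names and WorkItem class are hypothetical, not from any actual product):

```python
# Sketch of a work item advanced through small per-step transactions,
# with the application tracking partially completed state. Step names
# are invented for illustration.

STEPS = ["entered", "reserved", "shipped", "invoiced"]

class WorkItem:
    def __init__(self):
        self.done = []           # each entry = one committed transaction

    def advance(self):
        nxt = STEPS[len(self.done)]
        # ... one short database transaction would commit here ...
        self.done.append(nxt)

    @property
    def state(self):
        return self.done[-1] if self.done else "new"

item = WorkItem()
item.advance()
item.advance()
assert item.state == "reserved"   # partially completed, and that is fine
```

The point is that "reserved" is a legitimate resting state the application must understand, not a transaction held open waiting for the remaining steps.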
For example, updates are done away with almost entirely. All information
going into the system is designed to end up being inserted somewhere. Thus,
the entire database becomes a versioned view of the data. Updates only
happen when corrections are being made and records are being flagged as
"terminated" with an effective date of when they are no longer applicable.
For example, if a person's address changes, rather than updating information
in a table directly, a new address record is created in an address table and
a record is added to another table linking the person to the address with
the effective date. The record linking the former address is updated with
the termination date of the address. As a result, I know what address was
valid for which dates, and the history remains in the system.
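The address example might look like this in miniature, with lists standing in for the tables (field names and dates are hypothetical):

```python
# Sketch of the insert-only address history described above: each change
# inserts a new link row; the old link only gets its termination date set.

from datetime import date

addresses = {1: "12 Elm St", 2: "34 Oak Ave"}      # address table
person_address = []   # link rows: (person, addr_id, effective, terminated)

def move(person, addr_id, when):
    # Terminate the currently open link (the one allowed kind of update),
    # then insert a new link row -- the old address row is never touched.
    for i, (p, a, eff, term) in enumerate(person_address):
        if p == person and term is None:
            person_address[i] = (p, a, eff, when)
    person_address.append((person, addr_id, when, None))

def address_as_of(person, when):
    for p, a, eff, term in person_address:
        if p == person and eff <= when and (term is None or when < term):
            return addresses[a]

move("jason", 1, date(2000, 1, 1))
move("jason", 2, date(2001, 3, 1))
assert address_as_of("jason", date(2000, 6, 1)) == "12 Elm St"
assert address_as_of("jason", date(2001, 3, 17)) == "34 Oak Ave"
```

Every historical address stays queryable by date; nothing is ever overwritten except the termination column of the superseded link.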
I believe that this very granular style of database will become more and
more popular over time. The entire database is just a living archive that is
primarily inserted into. I look forward to some extensions that will
accommodate this better too. OODBMSs are stepping in the right direction
here, I think.
Then, for the more complex search side of things, I plan to use replicated
views of the pertinent data to be searched on. This is where the service
applications come into play. Their job is to maintain separate database(s)
holding all the "as of now", statistical, summary, full-text, etc. data
that becomes the target of queries from the external world. This is because
searching the granular database directly would be next to impossible to
accomplish with acceptable performance.
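The roll-up such a service would perform can be sketched as follows (the ledger layout is invented for illustration):

```python
# Sketch of a service rebuilding an "as of now" summary from the granular,
# insert-only ledger, so searches never touch the main database.

ledger = [
    ("widget", +10),   # received
    ("widget", -3),    # shipped
    ("gadget", +5),
]

def rebuild_summary(ledger):
    # What a replication/summary service would write to the search database.
    summary = {}
    for item, delta in ledger:
        summary[item] = summary.get(item, 0) + delta
    return summary

assert rebuild_summary(ledger) == {"widget": 7, "gadget": 5}
```

Queries from the outside world then hit this small derived table instead of summing the ever-growing ledger on every search.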
This model also allows good separation of load for scalability. Internal
users do their thing to the database and it is then replicated to external
databases which are hit from the external world. If the web server is
getting pounded, the internal users are still unaffected.
I suppose I could go on but breakfast is ready now and I am hungry...
Regards,
Jason Wharton
CPS - Mesa AZ
http://www.ibobjects.com