Subject: Re: Trigger or Stored Procedure dilemma
Author: andrew_s_vaz
> 1. They could deadlock, if it's already possible (without the
> triggers) for two concurrent transactions to successfully update two
> or more of the seven tables, for the same PK, without conflicting.
> Example: if you have "people", "addresses", and "phone_numbers", and
> it's possible for two concurrent transactions to modify the same
> person's information (by only touching either "addresses" or
> "phone_numbers" but leaving "people" alone), then your trigger would
> cause a problem: the two transactions would both insert/update that
> the "person" (in my example) needed to be replicated, and deadlock.
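[Editor's aside: the trigger pattern Philip is warning about can be sketched as below. The table and column names ("people", "addresses", "replication_queue") are illustrative assumptions, not from the thread, and SQLite stands in for whatever database is actually in use. Because every trigger funnels changes into one queue row per person, two concurrent transactions touching different detail tables both need a write lock on that same queue row, which is the deadlock risk.]

```python
import sqlite3

# Minimal sketch, assuming hypothetical table names. One queue row per
# person means concurrent writers converge on the same row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (person_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE addresses (person_id INTEGER, street TEXT);

-- Replication queue keyed by the person's PK. Any trigger on any of
-- the detail tables upserts the SAME queue row for a given person,
-- so two concurrent transactions contend for that row.
CREATE TABLE replication_queue (person_id INTEGER PRIMARY KEY, dirty INTEGER);

CREATE TRIGGER addresses_mark AFTER UPDATE ON addresses
BEGIN
    INSERT OR REPLACE INTO replication_queue (person_id, dirty)
    VALUES (NEW.person_id, 1);
END;
""")
conn.execute("INSERT INTO people VALUES (1, 'Andrew')")
conn.execute("INSERT INTO addresses VALUES (1, 'Main St')")
conn.execute("UPDATE addresses SET street = 'Elm St' WHERE person_id = 1")
print(conn.execute("SELECT * FROM replication_queue").fetchall())  # [(1, 1)]
```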
>
> 2. Doesn't matter; triggers and stored procedures have the same byte
> limit on the amount of code you can stuff into them; creating a
> separate SP just pollutes the database namespace, which is already
> too easy to do; an extra procedure call seems unlikely to speed
> things up. It wouldn't change your concurrency situation at all.
>
> 3. I think so, yes. It's a lot of tables, but you'll reduce your
> concurrency risks.
> Another choice would be to always insert into the replication
> table, never update; drop the PK on the replication table, and use
> DISTINCT when deciding what to pull out of it, and delete the
> replication records as you handle them.
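[Editor's aside: the insert-only alternative can be sketched like this. No PK on the replication log means concurrent writers only ever append, so they never fight over an existing row; the replicator collapses duplicates with DISTINCT on the way out and deletes what it has handled. Table names are illustrative, with SQLite standing in for the actual database.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# No PK: duplicates are allowed, so writers never update an existing row.
conn.execute("CREATE TABLE replication_log (person_id INTEGER)")

# Two transactions each append; person 1 is touched twice.
conn.executemany("INSERT INTO replication_log VALUES (?)", [(1,), (2,), (1,)])

# The replicator collapses duplicates on the way out...
pending = [r[0] for r in conn.execute(
    "SELECT DISTINCT person_id FROM replication_log ORDER BY person_id")]
print(pending)  # [1, 2]

# ...and deletes the records it has handled.
conn.executemany("DELETE FROM replication_log WHERE person_id = ?",
                 [(p,) for p in pending])
print(conn.execute("SELECT COUNT(*) FROM replication_log").fetchone()[0])  # 0
```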
>
> This assumes that the "master/detail" relationship is what I'm
> guessing it is, from your wording; I'm a little worried that a
> "delete" against a detail record could mark the master record for
> deletion in the next replication, which doesn't seem appropriate --
> is it the detail table's FK to the master that is put into the
> replication table, or that table's PK? And if it's that table's PK,
> how do you differentiate between operations on different tables, to
> know what the three PK fields mean?
>
> So, either:
> - keep a control table for the "master" part of a master/detail, and
>   move it and its detail rows over each time, replacing anything in
>   the replication database for that master (involves marking rows as
>   "bad" on the destination side, updating/inserting and marking as
>   "good" any data coming from the source, then deleting any rows
>   still marked as bad), or
> - keep control tables for each table, and replicate the exact
>   changes occurring, never touching more records than you need to.
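[Editor's aside: the first option's mark-bad / upsert-good / delete-bad cycle on the destination side can be sketched as follows. The "details" table, its columns, and the status flag are illustrative assumptions, with SQLite standing in for the destination database.]

```python
import sqlite3

dest = sqlite3.connect(":memory:")
dest.execute("CREATE TABLE details (master_id INTEGER, detail_id INTEGER, "
             "val TEXT, status TEXT, PRIMARY KEY (master_id, detail_id))")
dest.executemany("INSERT INTO details VALUES (?,?,?,'good')",
                 [(1, 10, "old"), (1, 11, "stale")])

incoming = [(1, 10, "new")]  # detail 11 was deleted at the source

# 1. Mark every destination row for this master as "bad".
dest.execute("UPDATE details SET status='bad' WHERE master_id=1")
# 2. Upsert incoming rows and mark them "good".
for m, d, v in incoming:
    dest.execute("INSERT OR REPLACE INTO details VALUES (?,?,?,'good')",
                 (m, d, v))
# 3. Delete rows still marked "bad": they no longer exist at the source.
dest.execute("DELETE FROM details WHERE master_id=1 AND status='bad'")

print(dest.execute("SELECT detail_id, val FROM details").fetchall())
# [(10, 'new')]
```

The appeal of this pattern is that the destination never needs to know *which* detail rows were deleted at the source; replacing the whole master's row set makes deletions fall out automatically.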
>
> -Philip

Thanks for your input, Philip. I see that my design is flawed, and I'm
going to put more thought into something better, as you advised.

Thanks again for your time.
Andrew