Subject | Re: [firebird-support] Trigger or Stored Procedure dilemma |
---|---|
Author | unordained |
Post date | 2008-09-10T15:33Z |
1. They could deadlock, if it's already possible (without the triggers) for two concurrent
transactions to successfully update two or more of the seven tables for the same PK without
conflicting. Example: if you have "people", "addresses", and "phone_numbers", and two concurrent
transactions can modify the same person's information (one touching only "addresses", the other
only "phone_numbers", neither touching "people"), then your trigger would cause a problem: both
transactions would try to insert or update the same control row recording that the "person" (in my
example) needed to be replicated, and deadlock.
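To make that concrete, here is a sketch of the conflict; the table names are from my example
and the DML statements are hypothetical, but the control-table names match your trigger:

```sql
-- Transaction A updates only the detail table "addresses" for person 1:
UPDATE addresses SET street = 'Main St' WHERE person_id = 1;
-- its trigger writes the person's control row:
--   INSERT INTO rpl_control (idat, idfil, ano, rplatnupd) VALUES (1, 1, 2008, 1);

-- Transaction B, concurrently, updates only "phone_numbers" for the same person:
UPDATE phone_numbers SET phone = '555-0100' WHERE person_id = 1;
-- its trigger now tries to insert/update the SAME rpl_control row,
-- so A and B conflict on rpl_control even though they never touched
-- the same user table.
```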
2. It doesn't much matter: triggers and stored procedures have the same byte limit on the amount
of code you can put in them; creating a separate SP just pollutes the database namespace, which is
already too easy to do; and an extra procedure call is unlikely to speed anything up. It wouldn't
change your concurrency situation at all.
3. I think so, yes. It's a lot of tables, but you'll reduce your concurrency risks.
Another choice would be to always insert into the replication table, never update; drop the PK
on the replication table, and use DISTINCT when deciding what to pull out of it, and delete the
replication records as you handle them.
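A rough sketch of that insert-only variant (the log-table name and the OP column are my own
invention, not part of your schema):

```sql
-- No PK, so duplicates are allowed and triggers never contend for a row:
CREATE TABLE RPL_LOG (
  IDAT  INTEGER  NOT NULL,
  IDFIL SMALLINT NOT NULL,
  ANO   SMALLINT NOT NULL,
  OP    CHAR(1)  NOT NULL  /* 'I', 'U' or 'D' */
);

-- Each trigger body shrinks to a single statement -- no read, no update conflict:
INSERT INTO RPL_LOG (IDAT, IDFIL, ANO, OP)
VALUES (new.IDAT, new.IDFIL, new.ANO, 'U');

-- The replicator pulls the distinct set, processes it, then deletes what it handled:
SELECT DISTINCT IDAT, IDFIL, ANO, OP FROM RPL_LOG;
DELETE FROM RPL_LOG
 WHERE IDAT = :idat AND IDFIL = :idfil AND ANO = :ano AND OP = :op;
```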
This assumes that the "master/detail" relationship is what I'm guessing it is, from your wording;
I'm a little worried that a "delete" against a detail record could mark the master record for
deletion in the next replication, which doesn't seem appropriate -- is it the detail table's FK to
the master that is put into the replication table, or that table's PK? And if it's that table's PK,
how do you differentiate between operations on different tables, to know what the three PK fields
mean?
So, either:
- keep a control table for the "master" part of a master/detail, and move it and its detail rows
over each time, replacing anything in the replication database for that master (involves marking
rows as "bad" on the destination side, updating/inserting and marking as "good" any data coming
from the source, then deleting any rows still marked as bad), or
- keep control tables for each table, and replicate the exact changes occurring, never touching more
records than you need to.
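The second option might look like this -- one control table per replicated table, so triggers on
different tables never contend for the same control row (the table names here are hypothetical,
reusing your column layout):

```sql
-- One control table per replicated table, e.g. for "people":
CREATE TABLE RPL_PEOPLE (
  IDAT      INTEGER  NOT NULL,
  IDFIL     SMALLINT NOT NULL,
  ANO       SMALLINT NOT NULL,
  RPLATNUPD SMALLINT DEFAULT 0,
  RPLATNINS SMALLINT DEFAULT 0,
  RPLATNDEL SMALLINT DEFAULT 0,
  CONSTRAINT PK_RPL_PEOPLE PRIMARY KEY (IDAT, IDFIL, ANO)
);
-- ...and likewise RPL_ADDRESSES, RPL_PHONE_NUMBERS, etc., each written
-- only by its own table's triggers, so concurrent updates to different
-- tables can no longer collide on a shared control row.
```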
-Philip
From: "andrew_s_vaz" <andrew_s_vaz@...>
> Hi all,
>
> Sorry if this seems like a rookie kind of question, but I tend to get
> edgy when I have to do something that can put my neck at risk. :-)
>
> I'm populating a table to control the replication of seven tables and
> I would like to know if I am doing something that will end up kicking
> me back in the privates. Initial tests seem ok, but with a production
> database being heavily used (several thousand updates/inserts a day)
> things can go in the wrong direction very quickly...
>
> The scenario:
> FB: 1.5.5.4926
> OS: Windows xp sp3
>
> Tables involved in replication: 7 (total tables in the DB 175)
>
> These tables are connected master-detail (and so on) and share the same 3
> primary-key columns, which I also use to control the replication per
> se (I replicate all 7 tables if necessary).
>
> Each table has a trigger; the triggers are essentially the same, all
> updating the same control table, each setting one of the operation columns
> RPLATNUPD, RPLATNINS, RPLATNDEL (for updates, inserts and deletes):
>
> CREATE TRIGGER IAT_UPD_REPL FOR THE_TABLE
> active before update position 0
> as
> declare variable totalregs integer;
> begin
> IF( USER <> 'REPLIC' ) THEN
> BEGIN
>
> select count(IDAT) from RPL_CONTROL
> where idat = new.idat and
> idfil = new.idfil and
> ano = new.ano
> into :totalregs;
>
> if (totalregs > 0 ) then
> begin
> update RPL_CONTROL set RPLATNUPD = 1
> where idat = new.idat and
> idfil = new.idfil and
> ano = new.ano;
> end
> else
> begin
> insert into RPL_CONTROL (IDAT,
> IDFIL,
> ANO,
> RPLATNUPD)
> values (new.idat,
> new.idfil,
> new.ano,
> 1);
> end
> end
> end
> ^
>
> This inserts a row if needed into the control table RPL_CONTROL, which has this DDL:
>
> CREATE TABLE RPL_CONTROL (
> IDAT INTEGER NOT NULL CHECK (value > 0),
> IDFIL SMALLINT NOT NULL CHECK (value > 0),
> ANO SMALLINT NOT NULL CHECK (value > 0),
> RPLATNUPD SMALLINT DEFAULT 0,
> RPLATNINS SMALLINT DEFAULT 0,
> RPLATNDEL SMALLINT DEFAULT 0 );
>
> ALTER TABLE RPL_CONTROL ADD CONSTRAINT PK_RPL_AT PRIMARY KEY (IDAT,
> IDFIL, ANO);
>
> Now, my questions:
>
> 1 - As all the tables are inserting/updating the same control table,
> could the triggers end up deadlocking?
>
> 2- Would it be better to use a Stored Procedure to do this and use the
> triggers to call the SP?
>
> 3- Should I do a separate control for each table?
>
> Thanks for the input.
> Andrew