Subject | RE: Replicator pseudo-code |
---|---|
Author | Phil Horst |
Post date | 2004-01-21T18:53:32Z |
I have written several versions of data replication processes, so I have
spent a fair amount of time working through these issues.
In general, your approach looks reasonable.
I found, in my situation, that transmitting data changes on a field
level was too slow. That is to say, the volume of data generated took
too long to process (within the constraints of my environment). I ended
up sending an entire record if anything changed on the record. This
reduced processing time from hours to minutes (or secs). However, this
was written in a different era of equipment. Perhaps in today's
environment, this would become immaterial.
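To make that record-level approach concrete, here is a minimal Python sketch (not the original implementation; the function and field names are hypothetical) that ships a whole record whenever any of its fields differ, instead of computing per-field diffs:

```python
# Hypothetical sketch: transmit the entire record if anything on it
# changed, rather than diffing and sending individual fields.

def changed_records(source_rows, dest_rows):
    """Yield full source records whose contents differ at the destination.

    Both arguments map primary key -> record (a dict of field values).
    """
    for key, record in source_rows.items():
        if dest_rows.get(key) != record:
            yield key, record  # ship the whole record, not a field diff

source = {1: {"name": "Ann", "qty": 5}, 2: {"name": "Bob", "qty": 3}}
dest   = {1: {"name": "Ann", "qty": 5}, 2: {"name": "Bob", "qty": 9}}
print(dict(changed_records(source, dest)))  # only record 2 is sent, whole
```

The per-record comparison is one equality test instead of a field-by-field diff, which is where the processing-time savings came from.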
Have you thought about situations where the same record (primary key) is
changed on two different systems during the same transfer cycle? Perhaps
this can't happen in your environment, but it is something that needs to
be considered in a general case. My approach was to create an exception
report and ask somebody to look at it.
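A sketch of that exception-report idea, in Python with made-up data: any primary key changed on both systems in the same cycle is pulled out of the normal flow and listed for a person to resolve.

```python
# Hypothetical sketch: detect records changed on both systems during the
# same transfer cycle and divert them to an exception list.

def partition_changes(changes_a, changes_b):
    """Each argument maps primary key -> new record from one system."""
    conflicts = sorted(set(changes_a) & set(changes_b))
    clean_a = {k: v for k, v in changes_a.items() if k not in conflicts}
    clean_b = {k: v for k, v in changes_b.items() if k not in conflicts}
    return clean_a, clean_b, conflicts

a = {10: {"qty": 1}, 11: {"qty": 2}}
b = {11: {"qty": 7}, 12: {"qty": 4}}
clean_a, clean_b, exceptions = partition_changes(a, b)
print(exceptions)  # [11] goes on the exception report for a human
```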
In the final version of the process I wrote, I created a feedback loop
that confirmed that a change/addition/deletion had indeed been applied
at the destination system. I found that I needed this because nothing
works perfectly in the real world. Until a verification record was
received, I could "re-send" the change information as needed. In my
case, I could not use commitment control because my servers were
asynchronously connected - I actually sent the change data as email
attachments. Maybe commitment control is feasible for you.
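The feedback loop can be sketched like this (again hypothetical Python, not the original code): every change sits in a pending queue and is re-sent each cycle until a verification record arrives from the destination.

```python
# Hypothetical sketch of the feedback loop: keep each change pending and
# re-send it every transfer cycle until the destination verifies it.

pending = {}  # change id -> payload, awaiting a verification record

def queue_change(change_id, payload):
    pending[change_id] = payload

def transfer_cycle(send):
    # Re-send everything still unverified; nothing is assumed applied.
    for change_id, payload in pending.items():
        send(change_id, payload)

def acknowledge(change_id):
    # A verification record arrived from the destination system.
    pending.pop(change_id, None)

sent = []
queue_change("c1", {"id": 1, "qty": 5})
transfer_cycle(lambda cid, payload: sent.append(cid))
acknowledge("c1")
transfer_cycle(lambda cid, payload: sent.append(cid))
print(sent)  # ["c1"] -- sent once, verified, then never re-sent
```

Note this gives at-least-once delivery, so the apply step at the destination has to tolerate receiving the same change twice.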
I also had to make sure my replication process knew how to deal with
metadata changes (e.g. new fields). Again, this may not apply in your
situation.
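One simple way to tolerate new fields, sketched in Python (an assumption on my part, not how the original process did it), is to widen the destination's known field set whenever an incoming record carries fields it has not seen:

```python
# Hypothetical sketch: when an incoming record carries unknown fields,
# widen the destination's known schema before applying the record.

known_fields = {"id", "name"}

def apply_record(record, store):
    global known_fields
    new_fields = set(record) - known_fields
    if new_fields:
        known_fields |= new_fields  # widen schema, then apply as usual
    store[record["id"]] = record

store = {}
apply_record({"id": 1, "name": "Ann", "qty": 5}, store)  # "qty" is new
print(sorted(known_fields))  # ['id', 'name', 'qty']
```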
Some thoughts that may or may not make sense for your situation. Good
luck.
Phil Horst