| Subject | Re: Replicating user data |
|---|---|
| Author | Adam |
| Post date | 2005-06-27T02:30:49Z |
--- In firebird-support@yahoogroups.com, "Lauchlan Mackinnon"
<lmackinnon@i...> wrote:
> > The technical implementation of this is trivial. A one-way merge
> > consists of selecting a record from one database, updating inactivated
> > records, and inserting new records into the second database. A two-way
> > merge is simply two one-way merges operating concurrently in opposite
> > directions.
>
> But how do you deal with cases where
>
> (i) the user is up to date with the current databases
> (ii) the user goes offline (say a local copy of the database is made)
> (iii) the user edits some rows offline
> (iv) meanwhile those rows are edited online in the main database
> (v) the user goes back online and wants to replicate his or her changes
>
> I would have presumed you'd need a logic of deciding which changes take
> precedence. For example, if the online changes take precedence, perhaps you
> have to come back to the user and say you can't commit changes to row 123456
> because someone has changed that row online. The new data is . . . what do
> you want to do?
>
> I think an n-tier framework would make supporting this kind of replication
> logic much easier. What is your strategy for handling these cases?

In our particular model this is not an issue. Obviously, if you need
to provide auditing information then you should take heed of David's
advice on the matter. My solution would certainly not be adequate for
that, however developing a model such as David's would have been
overkill for our task.
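
The one-way merge described in the quoted text (update existing rows, insert missing ones) can be sketched roughly as follows. This is only an illustration of the idea, not anyone's actual implementation: the CUSTOMER table, its columns, and the use of Python's sqlite3 module (standing in for a Firebird connection) are all assumptions for the sketch.

```python
import sqlite3

def one_way_merge(src: sqlite3.Connection, dst: sqlite3.Connection) -> None:
    """Merge a hypothetical CUSTOMER table from src into dst:
    update rows that already exist there, insert rows that do not."""
    for cust_id, name, inactive in src.execute(
            "SELECT ID, NAME, INACTIVE FROM CUSTOMER"):
        updated = dst.execute(
            "UPDATE CUSTOMER SET NAME = ?, INACTIVE = ? WHERE ID = ?",
            (name, inactive, cust_id)).rowcount
        if updated == 0:
            # Row absent in the destination: insert it.
            dst.execute(
                "INSERT INTO CUSTOMER (ID, NAME, INACTIVE) VALUES (?, ?, ?)",
                (cust_id, name, inactive))
    dst.commit()
```

A two-way merge, as the quote says, is then just `one_way_merge(a, b)` followed by `one_way_merge(b, a)` — which is exactly where the "dirty update" question below comes in, because the second pass can overwrite what the first pass wrote.
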
Lauchlan, I don't think David's solution introduced any additional
complexities to the "dirty update" problem. Whenever data is taken
out of the live DB and put back in later, you need to deal with that
problem. It is a business rule rather than a technical consideration.
If the synchronisation is automated, then the decision is something
like:

* Live DB always wins
* Offsite DB always wins
* Oldest change always wins
* Newest change always wins
* Some hybrid that attempts to merge changes
Every one of these except the last is trivial to implement.
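
To see why the first four are trivial, here is a minimal sketch of them as a single resolution function. It assumes each row carries a last-modified timestamp, which the schema under discussion may well not have; the `Row` structure and policy names are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Row:
    id: int
    data: str
    changed_at: datetime  # assumed last-modified timestamp column

def resolve(live: Row, offsite: Row, policy: str) -> Row:
    """Pick the winning version of a conflicting row under a simple policy."""
    if policy == "live_wins":
        return live
    if policy == "offsite_wins":
        return offsite
    if policy == "oldest_wins":
        return min(live, offsite, key=lambda r: r.changed_at)
    if policy == "newest_wins":
        return max(live, offsite, key=lambda r: r.changed_at)
    # The hybrid merge is the hard case: it needs per-column (or per-table)
    # business rules, not a one-line comparison.
    raise ValueError(f"no simple rule for policy {policy!r}")
```

The hybrid case is deliberately left unimplemented here, since merging changes field by field is the one option that genuinely depends on the business rules.
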
Adam