Subject Re: [firebird-support] Re: Replicating user data
Author Lauchlan Mackinnon
> In our particular model this is not an issue,

Right, it isn't always. For example, I have an app where the users are in
different offices, and all their data has to get replicated back to the
central office. All the data is disjoint, so the replication issue is
relatively straightforward.

> obviously if you need
> to provide auditing information then you should take heed of David's
> advice on the matter.

Sure, it sounded like good advice.

> Lauchlan, I don't think David's solution introduced any additional
> complexities to the "dirty update" problem. Whenever data is taken
> out of the live DB and put back in later, you need to deal with that
> problem.

> It is a business rule rather than a technical consideration.

Right. That was my question - how do you deal with this business rule, where
does it go?

> If the synchronisation is automated, then the decision is something
> like:
>
> * Live DB always wins
> * Offsite DB always wins
> * Oldest change always wins
> * Newest change always wins
> * Some hybrid that attempts to merge changes
>
> Every one of these except the last is trivial to implement.

OK, I guess that's where we end up. It comes down to nuts and bolts, and
that's where I'm interested. I'll learn how to do the trivial stuff! :)

Suppose you choose 'Live DB wins' and set out to do the replication. By this
rule, some of the records can be replicated to the live DB and some can't.
Suppose, for the sake of discussion, we keep track of the offline 'delta'
with three fields: NeedsUpdateToServer (boolean), ActionType ('update',
'insert' or 'delete'), and LastModifiedTimeStamp. And yep, I know you could
use just the timestamp plus a record of the time of the last replication to
the server to achieve much the same result.

OK, so given this we can get the delta with SELECT . . . WHERE
NeedsUpdateToServer = TRUE, returning the delta as a dataset.
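In memory, that extraction step is just a filter on the flag. A sketch (the field names follow the three tracking columns above; the list-of-dicts stands in for the dataset a real query would return):

```python
def extract_delta(offline_rows):
    """Mirror SELECT ... WHERE NeedsUpdateToServer = TRUE over an
    in-memory copy of the offline table, returning the delta rows."""
    return [r for r in offline_rows if r["NeedsUpdateToServer"]]
```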

Now, how do you apply this delta back to the server? I'm a newbie with
FB, and there may be better ways, but you could loop over the delta and, for
each row, apply an appropriate UPDATE/INSERT SQL command against the server.
But if you just do this, you are not applying the rule. So you'd have to
read the row on the live server first (if it's an update), check the
timestamp of its last update, and compare that with the row in
the delta. You'd then conditionally apply the replication for that row
depending on the result.

Then if it failed, you'd have to download the live row again, replace the
modified row in the delta with the original, and alert the user that the
update could not be committed to the server and the value had reverted.

Is that the kind of replication process you meant/envisioned?

Anyway, having spelt it out, I guess I agree with you that it does not
require an n-tier framework if you follow a simple rule, and is
straightforward to implement. An n-tier framework would probably still have
benefits, though: more control over the delta, storing versioning
information for the changes, and more flexibility for changing the rules
later or implementing more complex ones.

Lauchlan Mackinnon