Subject | Re: [IBO] OT Program Design
---|---
Author | Geoff Worboys |
Post date | 2001-07-31T07:36:44Z |
> I worked previously with a company that implemented NT Terminal
> Server I loved it especially after a power failure. You would
> log on and it was as if you were never gone.
> However here in Darkest Africa :-) most of us(clients and me)
> cannot afford leased lines or permanent connections.

I understand. Australia gets pretty dark when it comes to permanent
connections in some areas as well. Indeed I may be about to find out
just how dark, since we are hoping to move out of the city soon.
> If I consider option 2 do you have examples/demos that
> demonstrate how to cache data temporarily. How to work
> with an unconnected dataset that contains data. Also
> any design tips will be appreciated. I have never done
> something like this so I don't know what components to
> use. I suspect that there is most probably such
> a demo in IBO already.

I don't think that you will find anything as sophisticated as what you
will need in the samples. I do know Jason has built some additional
capabilities inside IBO4 to support broken connections, but I am not
sure how complete this is, nor am I sure whether it will react exactly
as you require (or even whether it is intended to reach this level of
sophistication).
I have not done anything specifically along these lines either. So
much depends on the particular application that it is difficult to
offer any good advice.
You may find it worthwhile looking at Jason's replication stuff
released with IBO4 - it may give you some pointers on where to start.
My original line of thought when I read your option 2 was that you
would set up a system where records are stored in a local copy of the
database in separate tables (possibly even in a separate database) or
are flagged as inserted/changed. When a batch is ready you would
connect and sync with the HO database - updating your own "cache"
accordingly. It is not that much different to using a replication
system, BUT you get to give the user tighter control over when the
changes are synchronised and provide faster feedback than waiting for
some sort of once-per-day replication.
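
To make that concrete, here is a very rough sketch of the "flagged as
inserted/changed" idea, in Python with SQLite just because it is
compact. All table, column and function names are invented for
illustration - in a Delphi/IBO application you would do the equivalent
with IBO components against a local database.

    # Local "cache" database: every write also flags the row in a
    # pending_changes table so it can be pushed to the HO later.
    # (Schema and names are made up for this example.)
    import sqlite3

    local = sqlite3.connect("branch_cache.db")
    local.executescript("""
    CREATE TABLE IF NOT EXISTS customers (
        id    INTEGER PRIMARY KEY,
        name  TEXT,
        phone TEXT
    );
    CREATE TABLE IF NOT EXISTS pending_changes (
        change_id  INTEGER PRIMARY KEY AUTOINCREMENT,
        table_name TEXT NOT NULL,
        row_id     INTEGER NOT NULL,
        action     TEXT NOT NULL CHECK (action IN ('I', 'U', 'D'))
    );
    """)

    def save_customer_locally(cust_id, name, phone):
        # Stage one: write to the local copy and flag the row as changed.
        with local:
            local.execute(
                "INSERT OR REPLACE INTO customers (id, name, phone) "
                "VALUES (?, ?, ?)", (cust_id, name, phone))
            local.execute(
                "INSERT INTO pending_changes (table_name, row_id, action) "
                "VALUES (?, ?, 'U')", ("customers", cust_id))
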
Sort of like a two-stage commit. Stage one commits to the local
database, stage two synchronises and commits to the HO. If the HO is
unavailable the second stage can be delayed until it becomes
available.
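
Continuing the same sketch, the second stage might look something like
this - connect_to_ho() and push_change() are placeholders for whatever
connection and update mechanism you actually use:

    def sync_pending_changes():
        # Stage two: push the flagged changes to the HO database and
        # clear the flags. If the HO is unreachable, simply give up and
        # leave the flags in place for the next attempt.
        try:
            ho = connect_to_ho()      # placeholder: may raise if link is down
        except ConnectionError:
            return False              # stage two deferred until HO is back

        cur = local.execute(
            "SELECT change_id, table_name, row_id, action "
            "FROM pending_changes ORDER BY change_id")
        for change_id, table_name, row_id, action in cur.fetchall():
            push_change(ho, table_name, row_id, action)   # placeholder
            with local:
                local.execute(
                    "DELETE FROM pending_changes WHERE change_id = ?",
                    (change_id,))
        return True
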
As I suggested earlier, I see this as more flexible because you can
also set up a timer which automatically looks for changes from the HO
database (perhaps including messages to the branch), allowing the
branches to stay in much tighter synchronisation. Problems, if they
occur, should be picked up much earlier - hopefully while it is still
possible for the staff to remember what the items were about.
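
A minimal sketch of that timer, again in Python and with
pull_ho_changes() as a placeholder for the actual download/merge logic
(the 10 minute interval is arbitrary):

    import threading

    POLL_SECONDS = 600   # how often to look for changes from the HO

    def poll_ho():
        try:
            pull_ho_changes()     # placeholder: fetch HO updates/messages
        except ConnectionError:
            pass                  # HO unreachable - try again next time
        finally:
            threading.Timer(POLL_SECONDS, poll_ho).start()

    poll_ho()   # kick off the polling cycle
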
So much depends on how complicated the database is, how much data is
likely to be involved, how fast and how regularly changes come in, how
likely conflicting updates from different branches are, and how stable
you expect the database structure to be.
Geoff Worboys
Telesis Computing