Subject | Re: is "UPDATE or INSERT" the best way to replicate a read-only local table ever? |
---|---|
Author | emb_blaster |
Post date | 2011-07-12T13:20:53Z |
> Talking from my own experience building the hot-standby/redo thingy in
> IB LogManager, the greater check on the primary key value of the log
> table is not reliable.
>
> Imagine the following scenario:
>
> - Transaction 1 changes something and produces a log entry with ID = 1,
> but the transaction is not yet committed
> - Transaction 2 changes something and produces a log entry with ID = 2
> and the transaction gets committed.
>
> The redo/replay process sees only committed log data, thus ID = 2 but
> not ID = 1 yet. So, it will replicate ID = 2. The next time, a check on
> ID with ID > 2 won't even see the now-committed log entry with ID = 1,
> thus the log entry with ID = 1 never gets replicated.
>
> In IB LogManager, the IBLMRedo_cmd utility maintains a processed log
> records table, with a maintenance mechanism in place to clean up the
> processed log table etc ...
>
>
> --
> With regards,
>
> Thomas Steinmaurer
Hi Thomas,

Thanks for dropping in...
That problem will only occur if the update process starts just after transaction 2 commits and before transaction 1 commits.
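To make that concrete, the kind of replay pass we are talking about looks roughly like this (table and column names here are just made up for the example, not anyone's real schema):

  -- last pass remembered LAST_ID = 2, because ID = 2 was already committed
  SELECT ID, TABLE_NAME, PAYLOAD
  FROM CHANGE_LOG
  WHERE ID > :LAST_ID   -- i.e. ID > 2
  ORDER BY ID;

  -- when transaction 1 finally commits, its entry still has ID = 1,
  -- so "ID > 2" never returns it and it gets skipped forever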
I think that won't happen in this specific scenario. But it got me thinking more about it, and I can't see another way to do this without a complete journaling system. I really don't want to add that complexity to this simple program. On the other hand, deleting all the records from the table and inserting the updated ones seems to create a lot of unnecessary traffic. I just can't think of any way to get around this. :(
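Just so it is written down, the incremental option from the subject line would be something like this on the local side (again with made-up table and column names, and it still needs a reliable way to know which source rows changed or were deleted):

  -- apply one changed source row to the local read-only copy, without a full reload
  UPDATE OR INSERT INTO LOCAL_COPY (ID, NAME, PRICE)
  VALUES (:ID, :NAME, :PRICE)
  MATCHING (ID);

  -- rows deleted at the source still have to be removed from LOCAL_COPY separately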