Subject Re: [firebird-support] Re: is "UPDATE or INSERT" the best way to replicate a read-only local table ever?
Author Thomas Steinmaurer
>> Yes, you just add a record to a little log every time something changes, regardless of whether it is a delete, update or insert.
>
> Here is an alternative:
> In the simple case of one-way sync, a log table MyChg on the server could look like: MyChg(idChange, tablename, id, op). (Note: TABLE is a reserved word, hence the column name "tablename".)
> For each table <MyTable> there is a trigger:
> CREATE TRIGGER MyTrigger FOR <MyTable>
> AFTER INSERT OR UPDATE OR DELETE AS
> declare variable op varchar(1);
> declare variable mID bigint;
> begin
>   if (DELETING) then
>   begin
>     op = 'D';
>     mID = old.id;
>   end
>   else
>   begin
>     op = 'U';  /* INSERTING and UPDATING are both logged as 'U' */
>     mID = new.id;
>   end
>   INSERT INTO MyChg (idChange, tablename, id, op)
>     VALUES (GEN_ID(Chg_GEN, 1), '<MyTable>', :mID, :op);
> end
> Following the ETL pattern, for each table you filter out the changes (idChange > idChg0) and forward them to the client; the deletions (op = 'D') from MyChg are handled separately. A suitable tool, fbExport, will produce (with the -S switch) a *.fbx file which, after transfer to the client, can be applied with the same tool (the -I switch) to the table at the destination.
> Export:
> fbExport -S -D <database> -U user -P password -F <out file>.fbx -V TableName -Q "r where exists (select 1 from MyChg mc where mc.tablename = '<TableName>' and mc.idChange > idChg0 and r.id = mc.id)"
> - zip all output files ...
> - receive the *.zip file, unzip it, and
> Import (with the option to update or insert!):
> fbExport -I -D <database> -U user -P password -F <out file>.fbx -V TableName
>
> In Delphi, I run fbExport.exe in a batch with "cmd /C ..." for each table separately.
> Clearly, some application server is needed on the server side for the transfer, and a counterpart on the client side to receive it. I use the Midware components from Francois Piette, but there is a wide range of tools for this purpose. There is also a wide range of scenarios for managing idChg0 and the sync process. For example, the server periodically exports the changes and leaves the *.zip file in a well-known place where the clients will find it.

Speaking from my own experience building the hot-standby/redo feature
in IB LogManager: a simple greater-than check on the primary key value
of the log table is not reliable.

Imagine the following scenario:

- Transaction 1 changes something and produces a log entry with ID = 1,
but the transaction is not yet committed.
- Transaction 2 changes something and produces a log entry with ID = 2,
and the transaction gets committed.

The redo/replay process sees only committed log data, thus ID = 2 but
not ID = 1 yet, so it replicates ID = 2. On the next run, a check of
the form "ID > 2" won't see the meanwhile-committed log entry with
ID = 1, so that entry is never replicated.
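In terms of the MyChg table from the quoted post, the fragile pattern looks roughly like this (idChg0 being the highest ID replicated in the previous run; a sketch, not any tool's actual query):

```sql
-- Unreliable: a high-water-mark check on the log table's primary key.
-- Any entry whose ID was generated before, but committed after, the
-- previous run falls below the mark and is silently skipped.
SELECT c.idChange, c.tablename, c.id, c.op
FROM MyChg c
WHERE c.idChange > :idChg0
ORDER BY c.idChange;
```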

In IB LogManager, the IBLMRedo_cmd utility therefore maintains a table
of processed log records, with a maintenance mechanism in place to
clean up that table, etc.
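I don't know IB LogManager's actual schema, but the processed-records idea can be sketched like this against the MyChg table from the quoted post (names are illustrative assumptions):

```sql
-- Sketch only: track each replicated log entry explicitly instead of
-- relying on a high-water mark, so late-committing entries are still found.
CREATE TABLE MyChgProcessed (idChange BIGINT NOT NULL PRIMARY KEY);

-- Every committed, not-yet-processed entry, regardless of ID order:
SELECT c.idChange, c.tablename, c.id, c.op
FROM MyChg c
WHERE NOT EXISTS (SELECT 1 FROM MyChgProcessed p
                  WHERE p.idChange = c.idChange)
ORDER BY c.idChange;

-- After replicating an entry, record it (and periodically clean up
-- both tables together):
INSERT INTO MyChgProcessed (idChange) VALUES (:idChange);
```

The anti-join costs more than a simple range scan, which is presumably why a cleanup mechanism to keep both tables small matters in practice.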


--
With regards,

Thomas Steinmaurer
Upscene Productions
http://www.upscene.com
http://blog.upscene.com/thomas/

Download LogManager Series, FB TraceManager today!
Continuous Database Monitoring Solutions supporting
Firebird, InterBase, Advantage Database, MS SQL Server
and NexusDB!