Subject: Re: [firebird-support] Re: Database corruption (again) or what is wrong with Firebird.
Author: Alexandre Benson Smith
HKlemt wrote:
>> Hi Holger !
>>
>> We had talked about this system at the last Firebird Developers Day in
>> Brazil, and I thought it interesting and simple to implement from what
>> we had talked about that night, but I had not looked into the real
>> implementation until today :)
>>
>
> Will you be there again this year? I will show the newest
> implementation in my session there.
>

Yep, I will be there; I will give a talk about GSTAT.

>
>> I know it's just the skeleton of a replication system...
>>
>
> Yes, at least the things I published in my article, which is from 2006.
>
>
>> No handle of computed fields
>>
> Yes, but easy to add with a restriction on rdb$relation_fields in
> the initlog procedure.
>

Yes.
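
For example, a minimal sketch of that restriction, assuming the
initlog procedure picks the columns to log with a query like this
(the parameter name is mine; computed columns are the ones whose
RDB$COMPUTED_BLR is set in RDB$FIELDS):

  /* Select only the non-computed columns of a table. */
  SELECT rf.RDB$FIELD_NAME
  FROM RDB$RELATION_FIELDS rf
  JOIN RDB$FIELDS f ON f.RDB$FIELD_NAME = rf.RDB$FIELD_SOURCE
  WHERE rf.RDB$RELATION_NAME = :TABLE_NAME
    AND f.RDB$COMPUTED_BLR IS NULL
  ORDER BY rf.RDB$FIELD_POSITION;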

>
>> Always updates full record, not just the changed columns
>>
> Missing in the 2006 version, but not in the 2008 version ;-)
>

:)

The problem I ran into is that with tables with too many fields, it is
easy to overrun the 64 KB limit for the procedure code.

Two solutions:
1.) one trigger for each field
2.) put a hard limit on the number of fields per trigger, for example
30 fields per trigger, and create multiple triggers (see the sketch
below).

Since I had few cases like this, I chose to split the problematic ones
by hand.
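
A sketch of option 2, with made-up names (BIG_TABLE, REPL_LOG and the
FIELD_xx columns are hypothetical); each generated trigger logs only
its own chunk of columns, so no single trigger source approaches the
limit:

  SET TERM ^ ;

  /* Chunk 1: fields 1..30 (only the first is shown). */
  CREATE TRIGGER TRG_LOG_BIG_TABLE_1 FOR BIG_TABLE
  ACTIVE AFTER UPDATE POSITION 10
  AS
  BEGIN
    /* IS DISTINCT FROM (Firebird 2.0+) also catches NULL changes. */
    IF (NEW.FIELD_01 IS DISTINCT FROM OLD.FIELD_01) THEN
      INSERT INTO REPL_LOG (TABLE_NAME, PK_VALUE, FIELD_NAME, NEW_VALUE)
      VALUES ('BIG_TABLE', OLD.ID, 'FIELD_01', NEW.FIELD_01);
    /* ... same pattern for FIELD_02 .. FIELD_30 ... */
  END^

  /* Chunk 2: fields 31..60, same pattern. */
  CREATE TRIGGER TRG_LOG_BIG_TABLE_2 FOR BIG_TABLE
  ACTIVE AFTER UPDATE POSITION 11
  AS
  BEGIN
    IF (NEW.FIELD_31 IS DISTINCT FROM OLD.FIELD_31) THEN
      INSERT INTO REPL_LOG (TABLE_NAME, PK_VALUE, FIELD_NAME, NEW_VALUE)
      VALUES ('BIG_TABLE', OLD.ID, 'FIELD_31', NEW.FIELD_31);
    /* ... same pattern for FIELD_32 .. FIELD_60 ... */
  END^

  SET TERM ; ^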

>
>> No way to avoid replication of columns maintained by triggers
>>
> This could be handled, for example, by using specific domains or
> column names for these columns and ignoring them in the same way as
> described above. The other way would be for the triggers to ignore
> changes made by the replication user, so there is also no conflict
> when transferring these columns' data, as you described.
>
>

Good solution. I manually changed the auto-generated triggers to not
log those columns; not hard to do the first time, but boring to
maintain :(
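
A minimal sketch of that second approach, assuming the replicator
connects as a dedicated user (the table name CUSTOMER and the user
name 'REPL' are made up):

  SET TERM ^ ;
  CREATE TRIGGER TRG_LOG_CUSTOMER FOR CUSTOMER
  ACTIVE AFTER UPDATE POSITION 10
  AS
  BEGIN
    /* Changes applied by the replication user must not be logged
       again, or they would bounce between the databases forever. */
    IF (CURRENT_USER = 'REPL') THEN
      EXIT;
    /* ... normal logging of the changed columns goes here ... */
  END^
  SET TERM ; ^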

>> I did not realize how you handle the following situation:
>> First Replication Action: Insert a new record ID#1 (fails)
>> Second Replication Action: Change field "foo" on record ID#1 (which
>> does not exist, since the insert failed)
>>
>> Does the second action get logged as "done"? If so, this will lead
>> to problems of mismatched data between databases (the same applies
>> to a delete instead of an update).
>>
>
> Simple question: why should the first action fail? If it is a data
> problem (PK already exists or so), you have a problem in your
> application; it has no influence on the replication.
>

It could fail for a lot of reasons, be they data rules (unique
constraints, something like not allowing two customers with the same
name) or business rules, which are hard to predict (something like:
do not accept orders from customers that have unpaid bills; at the
moment the user enters the new order on a given system, the system
allows it because there are no pending payments, but at replication
time it could be different). Those are just two examples, one of each
case, but there are more complex rules that could lead to a failure
at the moment of replication.

> Basically, we transfer all operations in transactions, and if any of
> them fails, no later operation is allowed to be transferred before
> the first one runs fine, so the mismatch you expect will never
> happen.
> When you think about a replicated database, your concept must allow
> transferring all records, and basic database rules must already
> raise exceptions when it is not certain that these records can be
> written to another database. This is also the reason why I use the
> offset for the ID generator in all databases.
>

Since I was implementing it on a legacy system that was not designed
to be distributed, I had some more problems with PK conflicts; I
implemented some workarounds to avoid them.
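
A minimal sketch of the generator offset Holger mentions, with
made-up numbers: each site seeds its generators in a distinct range,
so IDs created at different sites can never collide.

  CREATE GENERATOR GEN_CUSTOMER_ID;

  /* Run once per site; e.g. site 3 owns the range above
     3,000,000,000 (generators are 64 bit, so the ranges are huge). */
  SET GENERATOR GEN_CUSTOMER_ID TO 3000000000;

  /* Every local insert then draws from the site's own range. */
  SELECT GEN_ID(GEN_CUSTOMER_ID, 1) FROM RDB$DATABASE;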

In my case there are around 20 distributed databases and a master one
that consolidates all the data. Replication is offline: the
replication application collects the replication data, makes a data
packet with the delta, compresses it, and sends it via FTP to an
"always on" server. The master location polls the "always on" server,
gets the various delta packets from each satellite database, and
applies them to the master database; the same goes the other way
around.

It has been working OK for 3 months...

> When the replicator transfer process finds new records in the log,
> it tries to transfer all the records in one transaction. If
> everything works, the insert into the destination database (one
> insert for each operation, but all in one transaction) is committed
> and, due to the trigger on the log table, executed using execute
> statement.
>
>
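
A minimal sketch of such a log-table trigger, assuming the log table
and its SQL_TEXT column (both names are mine) store the ready-to-run
statement:

  SET TERM ^ ;
  CREATE TRIGGER TRG_REPL_LOG_APPLY FOR REPL_LOG
  ACTIVE AFTER INSERT POSITION 0
  AS
  BEGIN
    /* Apply the logged DML; since the whole batch shares one
       transaction, any failure rolls back every operation. */
    EXECUTE STATEMENT NEW.SQL_TEXT;
  END^
  SET TERM ; ^
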
>> Perhaps it was just a misunderstanding on my part, since I did not
>> read all the code carefully; if so, just forgive me.
>>
>
> The code I published is not always easy to read ;-)
>

It's not hard to read, I just did not read it all carefully :)
>
>> I think those are my doubts from a first look at your system.
>>
>> Thanks for sharing it with us !
>>
>
> I like to do so :-)
> cu
>
>
> Holger
> www.ibexpert.com
>

See you!

--
Alexandre Benson Smith
Development
THOR Software e Comercial Ltda
Santo Andre - Sao Paulo - Brazil
www.thorsoftware.com.br