Subject | Re: [Firebird-Architect] WAL and JOURNAL
---|---
Author | Jim Starkey
Post date | 2004-01-21T17:13:48Z
Olivier Mascia wrote:
>So we're clear about the fact that those slaves would be guaranteed to
>be read-only beasts: only accepting read-only transactions, or some other
>mechanism to guarantee those slaves are read-only?
>
>Are we sure those uncommitted changes, eventually physically present on
>the slave pages, could not interact badly with running queries?
>
>If I start a "long-running" query in a default read-only transaction
>(concurrency) on a slave, and new page updates come in from the master
>with new committed transactions, how will this work? I would need to
>see the old records that might well not exist anymore in the recent
>pages. I may be lost in the details, but I fear some issue here.
>
The tricky part will be to determine the order in which pages are sent to
the mirrors so that careful write is preserved. It is straightforward to
preserve the write order until a cycle creeps in. The cache manager
knows how to detect and handle cycles on a fetch request, but it isn't
obvious to me how to handle the problem for many batched page changes,
though I'm sure this has been thought through. Without preserving the
careful write order, there is no guarantee that the mirrored database is
usable unless the snapshot of changed pages is frozen immediately after a
cache flush and the mirror is presumed inconsistent during the period in
which the page changes are applied. This would be particularly messy if
the mirrored version were used for read-only access during the update
process. Perhaps Nickolay's clever side shadow would help, but the
shock to a long-running read-only transaction when a couple of hours'
worth of page changes suddenly appeared would be interesting. Another
tricky problem is how to handle the case where a batch update fails,
which is a critical case when mirroring is used for disaster recovery. I
suppose a log of page pre-images from the update would do the trick, but
it adds another piece of complexity and further complicates the problem
of using the mirror for queries, reports, etc.
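To make the ordering problem concrete, here is a minimal sketch, and
emphatically not the actual Firebird cache manager: precedence edges
record which page must reach the mirror first, a depth-first topological
sort produces a safe shipping order, and any page caught in a cycle is
flagged so it can be handled specially (say, by freezing the batch and
keeping page pre-images). The Precedence structure, the page numbering,
and shipOrder are all illustrative assumptions.

    #include <cstdint>
    #include <map>
    #include <set>
    #include <vector>

    using PageNo = uint32_t;

    // Entry (p, q) means page p must be written after page q, i.e. q is
    // a careful-write predecessor of p.  Purely illustrative.
    struct Precedence {
        std::multimap<PageNo, PageNo> writeAfter;
    };

    // Depth-first walk: a page is emitted only after every page it must
    // follow.  A page found on the active DFS stack marks a cycle.
    static void visit(PageNo p, const Precedence& prec,
                      std::set<PageNo>& done, std::set<PageNo>& onStack,
                      std::vector<PageNo>& order, std::set<PageNo>& cyclic)
    {
        if (done.count(p))
            return;
        if (onStack.count(p)) {     // back edge: precedence cycle
            cyclic.insert(p);
            return;
        }
        onStack.insert(p);
        auto range = prec.writeAfter.equal_range(p);
        for (auto it = range.first; it != range.second; ++it)
            visit(it->second, prec, done, onStack, order, cyclic);
        onStack.erase(p);
        done.insert(p);
        order.push_back(p);         // all predecessors already emitted
    }

    // Returns a mirror-safe write order for the batch; pages in 'cyclic'
    // have no safe position and need special handling (freeze the batch,
    // keep pre-images, mark the mirror inconsistent while applying).
    std::vector<PageNo> shipOrder(const std::vector<PageNo>& dirty,
                                  const Precedence& prec,
                                  std::set<PageNo>& cyclic)
    {
        std::vector<PageNo> order;
        std::set<PageNo> done, onStack;
        for (PageNo p : dirty)
            visit(p, prec, done, onStack, order, cyclic);
        return order;
    }

The hard part, as noted above, is the cyclic set: on a single fetch the
cache manager can break the cycle as it goes, but in a large shipped
batch some pages may have no safe position at all.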
There are other ways to address the problem. Netfrastructure has a client
whose application does four- or five-node cascading replication. The
replication is at the application level but is eased by a number of
features, including:
1. Multi-table triggers to maintain a local change log table
2. A sequence (generator) mechanism that produces network-unique values
(see the sketch after this list)
3. A server-to-server service based on an exchange of serialized
object clusters
4. An internal scheduler to kick off change propagation
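For item 2, here is a minimal sketch of one common scheme for
network-unique sequence values, assuming nothing about Netfrastructure's
actual generator: pack a per-server node id into the high bits of a
64-bit value and a local monotonic counter into the low bits, so each
server can hand out values concurrently with no cross-server
coordination. The NodeSequence name and the 16/48-bit split are
illustrative assumptions.

    #include <atomic>
    #include <cstdint>

    // Network-unique 64-bit sequence: high 16 bits identify the server,
    // low 48 bits are a per-server monotonic counter.  Uniqueness holds
    // as long as every server is configured with a distinct nodeId.
    class NodeSequence {
    public:
        explicit NodeSequence(uint16_t nodeId) : nodeId(nodeId), counter(0) {}

        uint64_t next() {
            uint64_t local = ++counter;                // local uniqueness
            return (uint64_t(nodeId) << 48) | (local & 0xFFFFFFFFFFFFull);
        }

    private:
        const uint16_t nodeId;                         // distinct per server
        std::atomic<uint64_t> counter;                 // never reused locally
    };

With 48 local bits, a server could issue a million values a second for
almost nine years before wrapping, so wraparound is not a practical
concern here.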
The way they use the system is this: a primary server handles the web
traffic (all clients are browsers), the second (belt) and third
(suspenders) servers are hot backups, and the fourth server is forty
miles away to protect against anything less than a 10-megaton blast. The
belt and suspenders servers, however, also do batch reporting and
accounting wrapups that get replicated upstream as well.
In the long run, I think features that address these kinds of issues are
more interesting than lower-level physical database replication. In the
short term, though, they're out of reach, and a stopgap for the disaster
recovery liability is a very good thing.
Jim Starkey
Netfrastructure, Inc.
978 526-1376