Subject | Re: [Firebird-Architect] (Near to) 0 down time? (with ease I hope...)
---|---
Author | Jim Starkey
Post date | 2006-02-03T12:31:51Z
Dimitry Sibiryakov wrote:
>On 2 Feb 2006 at 11:08, m_theologos wrote:
>
>
>
> I don't think so. Because you can't control network buffers,
>shadowing to another computer will either lead to a slowdown (if FB
>waits until the data is really written to the shadow) or a broken
>shadow if the server crashes before sending data from its buffers.
>
>
We used to shadow across NFS and we never had a problem. It is true
that it significantly adds to the effective page write time, but careful
write handles the case of a primary system crash. A crash of a
remote shadow server would require that the shadow be rebuilt, but
that's to be expected because the primary server has to continue anyway.
If shadow propagation were delegated to a separate thread rather than
performed inline, the overhead would disappear, provided the network
bandwidth was up to the effective disk write bandwidth.
The downside of shadowing is that, like NBak, if the database gets
corrupted, you have N equally corrupted copies.
> I think that currently there is only one real way to decrease
>downtime with FB (and IB) - replication. But neither method guarantees
>against data loss. When you use replication and the server crashes,
>you'll lose data that has not been replicated yet. If you use
>shadowing, you may lose the whole database if the shadow becomes
>inconsistent (broken).
>
>
I think this is a much better approach, all in all. It makes better use
of network bandwidth, allowing geographical distribution of replicants
in case of widespread disaster. This may seem a little far-fetched,
but replication to a machine in a distant bunker is accepted business
practice for mission-critical databases, especially after 9/11. A
customer of mine uses a 5-way replication chain: a primary web server, a
secondary replicant doing batch reporting, two (depending on the phase
of the month) systems for hot failover (gross overkill), and a machine
40 miles away in an underground vault with two independent power feeds,
armed guards, and all other sorts of ridiculous crap.