| Subject | Re: [Firebird-Architect] Superserver |
| --- | --- |
| Author | Alex Peshkov |
| Post date | 2006-10-31T08:48:01Z |
Erik LaBianca:
> Alex Peshkov wrote:
>>
>> Erik LaBianca:
>>> A superserver system will never be able
>>> to continue serving requests through a hardware failure, whereas classic
>>> operating on a cluster could do so.
>>>
>> Sorry, Erik, but it does. A number of clients of our company have used SS
>> fail-resistant clusters for the last 4 years. Even without expensive shared
>> storage.
>>
>
> How? Shadow databases on NFS? With that architecture you'll still have
> to delegate a master database to receive all write traffic and
> communicate it to the clients... I've certainly not seen any such
> solution documented...
>
I don't mean load balancing - only want to continue serving requests
after a hardware failure.

Asymmetric cluster - two computers (master and slave) both have firebird
installed; the master normally runs it, the slave does not. The slave's
partition is mounted by the master (using - you are absolutely right -
NFS), and the database is mirrored to that partition. The cluster IP is
assigned to the master.
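
For concreteness, here is a minimal sketch of how such a mirror can be
set up with Firebird's built-in shadow mechanism. The paths, DSN, and
credentials are illustrative assumptions, not details of our actual
setup, and I use the Python fdb driver only for brevity - any client
that can execute DDL (isql included) would do:

```python
import fdb  # Python driver for Firebird; isql works just as well

# Hypothetical paths and credentials - replace with your own.
con = fdb.connect(
    dsn="localhost:/var/db/production.fdb",  # master's primary database
    user="SYSDBA",
    password="masterkey",
)

# Create shadow number 1 on the slave's partition, which the master has
# mounted over NFS. From then on Firebird keeps the shadow file
# page-for-page in sync with the primary database file.
cur = con.cursor()
cur.execute("CREATE SHADOW 1 '/mnt/slave/production.shd'")
con.commit()
con.close()
```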
The slave regularly checks whether the master is alive. When it decides
(this is the most problematic part of the whole solution) that the
master is dead, it assigns the cluster's IP to itself, starts firebird
and is ready to answer queries with no committed transactions lost.
Certainly, clients should reattach to the cluster in case of failure
(clearing the ARP cache), but for our purposes this is not a problem.
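
A rough sketch of the slave-side watchdog follows. The addresses,
interface name, failure threshold, shadow path, and service start
command are all assumptions for illustration; as said above, reliably
deciding that the master is really dead is the genuinely hard part, and
a production deployment needs better fencing than a few missed pings:

```python
import subprocess
import time

# Illustrative values - none of these come from the real clusters.
MASTER_ADDR = "192.168.0.1"      # master's own (non-cluster) address
CLUSTER_IP = "192.168.0.100/24"  # service address the clients attach to
INTERFACE = "eth0"
SHADOW_FILE = "/data/production.shd"  # local copy mirrored from the master
MAX_MISSES = 5                   # consecutive failed probes before takeover


def master_alive() -> bool:
    """One liveness probe: a single ping with a short timeout."""
    return subprocess.call(
        ["ping", "-c", "1", "-W", "2", MASTER_ADDR],
        stdout=subprocess.DEVNULL,
    ) == 0


def take_over() -> None:
    """Claim the cluster IP, announce it, and bring firebird up."""
    # Bring the cluster address up on our interface.
    subprocess.check_call(
        ["ip", "addr", "add", CLUSTER_IP, "dev", INTERFACE])
    # Send gratuitous ARP so clients and switches learn the new MAC
    # quickly - this is the ARP-cache point from the text above.
    subprocess.call(
        ["arping", "-U", "-c", "3", "-I", INTERFACE,
         CLUSTER_IP.split("/")[0]])
    # Turn the shadow into a normal, attachable database...
    subprocess.check_call(["gfix", "-activate", SHADOW_FILE])
    # ...and start the server (the exact command is platform-specific).
    subprocess.check_call(["/etc/init.d/firebird", "start"])


misses = 0
while True:
    misses = 0 if master_alive() else misses + 1
    if misses >= MAX_MISSES:
        take_over()
        break
    time.sleep(2)
```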