Subject: Re: [Firebird-Architect] One way to scale Firebird: put it on a memory virtual disk or ramfs
Author: Jim Starkey
Let me clarify. If a storage manager process is "voted off the island"
by other nodes for some reason, that process can't rejoin. A new process
can, however, be started on the existing archive and re-enter the chorus
(they may have dropped this term), in which case the archive will be
synchronized with the other storage nodes before it is fully operational.
So the archive can re-enter, but the storage manager process that had
been managing the archive cannot.
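
A minimal sketch of that rule in Python, with invented names throughout
(Chorus, StorageManager, Archive and their methods are illustrative, not
NuoDB's actual API): an evicted process identity is fenced for good, but a
fresh process started on the same archive may join once the archive has
been synchronized.

# Hypothetical sketch of the eviction/rejoin rule described above.
# All names are illustrative; this is not a real API.

class Archive:
    """On-disk state; survives the death of the process managing it."""
    def __init__(self, path):
        self.path = path

class StorageManager:
    def __init__(self, archive):
        self.archive = archive

class Chorus:
    def __init__(self):
        self.members = set()
        self.fenced = set()   # identities of processes voted off the island

    def evict(self, sm):
        # A voted-off process can never rejoin under the same identity.
        self.members.discard(sm)
        self.fenced.add(id(sm))

    def join(self, sm):
        if id(sm) in self.fenced:
            raise PermissionError("evicted process may not rejoin")
        self.synchronize(sm.archive)    # catch the archive up first
        self.members.add(sm)            # only then is it fully operational

    def synchronize(self, archive):
        # Placeholder: copy missing changes from the active storage managers.
        pass

# The archive re-enters via a *new* process on the same files.
chorus = Chorus()
old = StorageManager(Archive("/data/archive1"))
chorus.join(old)
chorus.evict(old)
fresh = StorageManager(old.archive)     # same archive, new process
chorus.join(fresh)                      # allowed, after synchronization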


On 1/22/13 4:00 PM, Dalton Calford wrote:
>
> Hi Ann,
>
> > Version 1 allows storage managers to rejoin the database after a partition
> > has been resolved.
> >
> Glad to hear.
>
> > That answer is correct. When the storage managers are partitioned, only
> > those storage managers in the larger partition allow transactions to
> > commit. When the partition is resolved, the previously inactive storage
> > managers synchronize their archives with the continuously running storage
> > managers and become part of the solution again.
> >
> So, the other members act to heal any missing atom pages on the newly
> reconnected segment. Good.
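
As a tiny sketch of that majority rule (names invented here): commits are
accepted only on the side of a partition that can still see a strict
majority of the full membership.

# Illustrative quorum check for the rule quoted above: only storage
# managers in the larger partition accept commits.

def can_commit(visible_nodes, all_nodes):
    """Allow commits only on the side holding a strict majority."""
    return len(visible_nodes) > len(all_nodes) / 2

nodes = {"sm1", "sm2", "sm3", "sm4", "sm5"}
majority_side = {"sm1", "sm2", "sm3"}
minority_side = {"sm4", "sm5"}

assert can_commit(majority_side, nodes)        # commits proceed here
assert not can_commit(minority_side, nodes)    # blocked until the heal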
>
> > No, but I doubt that you are asking that two halves of a partition
> > continue to operate without communication and then rejoin.
>
> Actually, I am asking exactly that: part of our design was that we could
> have a complete fracture and have every fragment of the network continue
> working. We are under five-nines design criteria, and a fracture of our
> IP-based admin network must not impact the operations of our managed SS7
> network. Each table in the database is designed to operate within windows
> of data, and the client must understand whether an answer is authoritative
> or not. For example, we have a service where you can make phone calls from
> pre-paid phone cards. If our network fragments, the switch makes certain
> decisions based upon the information it has, falling back on default
> rules. All commits go into local caches that are then pulled when the
> network heals. This allows the system to continue while limiting risk.
> Although each switch may have access to a local copy of all the tables,
> the data in those tables is just a subset, or window, and every record is
> stamped with a unique server/database/transaction identifier.
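
A minimal sketch of the scheme described above, with every name invented
for illustration: each record is stamped with a server/database/transaction
identifier, commits made while fragmented land in a local cache that is
drained on heal, and a read reports whether its answer is authoritative.

# Hypothetical sketch of the fragment-and-heal design described above.
import itertools

class Fragment:
    def __init__(self, server_id, database_id):
        self.server_id = server_id
        self.database_id = database_id
        self._txn = itertools.count(1)  # local transaction counter
        self.local_cache = []           # commits held while partitioned
        self.connected = True

    def commit(self, record, replicate):
        # Stamp every record with a unique server/database/transaction id.
        record["stamp"] = (self.server_id, self.database_id, next(self._txn))
        if self.connected:
            replicate(record)
        else:
            self.local_cache.append(record)    # hold until the heal

    def heal(self, replicate):
        """Replay cached commits once connectivity returns."""
        self.connected = True
        while self.local_cache:
            replicate(self.local_cache.pop(0))

    def read(self, query):
        # An answer drawn from the local window only is not authoritative.
        return {"query": query, "authoritative": self.connected}

# Usage: a pre-paid card switch keeps committing during a fracture.
frag = Fragment(server_id=7, database_id=1)
frag.connected = False
frag.commit({"card": "A123", "minutes_used": 3}, replicate=print)
frag.heal(replicate=print)    # the cached commit is pulled on heal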
>
> It is funny, but the window concept was originally designed to deal with
> the maximum table size limit in InterBase and evolved to serve a totally
> different purpose over time.
>
> We have daemons/bots that deal with different elements, and it is all
> still running on Firebird 1.5 and will be for at least another year,
> until that system is retired and the new Firebird 2.5 systems come to
> the forefront.
>
> >
> > Commits are not allowed in separate segments.
> >
> This is where our designs differ: all segments can temporarily become the
> primary and then resume a secondary role.
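
Sketching that role transition (again, with hypothetical names): a segment
promotes itself to primary for its window of data during a fracture, then
drops back to secondary once the network heals and its cached commits have
been pulled.

# Illustrative role transition for a segment during a network fracture.

class Segment:
    def __init__(self):
        self.role = "secondary"

    def on_fracture(self):
        self.role = "primary"      # keep accepting local commits

    def on_heal(self):
        # Cached commits are pulled by the rest of the network, then
        # the segment resumes its secondary role.
        self.role = "secondary"

seg = Segment()
seg.on_fracture()
assert seg.role == "primary"
seg.on_heal()
assert seg.role == "secondary"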
>
> best regards
>
> Dalton
>


