Subject | Re: [Firebird-Architect] One way to scale Firebird put it on memory virtual disk or ramfs |
---|---|
Author | Dalton Calford |
Post date | 2013-01-22T21:00:36Z |
Hi Ann,
> Version 1 allows storage managers to rejoin the database after a partition
> has been resolved.

Glad to hear.
> That answer is correct. When the storage managers are partitioned, only
> those storage managers in the larger partition allow transactions to
> commit. When the partition is resolved, the previously inactive storage
> managers synchronize their archives with the continuously running storage
> managers and become part of the solution again.

So, the other members act to heal any missing atom pages on the newly
reconnected segment. Good.
> No, but I doubt that you are asking that two halves of a partition continue
> to operate without communication and then rejoin.

Actually, I am asking that - part of our design was that we could have a
complete fracture and have every fragment of the network continue working.
We operate under five-nines design criteria, and a fracture of our IP-based
admin network must not impact the operations of our managed SS7 network.
Each table in the database is designed to operate within windows of data,
and the client must understand whether an answer is authoritative or not.
For example, we have a service where you can make phone calls from pre-paid
phone cards. If our network fragments, the switch makes certain decisions
based upon the information it has, falling back on default rules. All
commits go into local caches that are then pulled when the network heals.
This allows the system to continue operating while limiting risk.
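A minimal sketch of the local-commit-cache idea described above, assuming a simple pull model: each switch appends commits to a local queue while partitioned, and the central site drains the queue when the network heals. The names here (`LocalCommitCache`, `commit_locally`, `pull`) are illustrative, not Firebird APIs.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Commit:
    switch_id: str          # which switch produced the commit
    txn_id: int             # transaction number local to that switch
    payload: dict           # the row data being committed
    stamped_at: float = field(default_factory=time.time)

class LocalCommitCache:
    """Per-switch cache of commits made while the network is fragmented."""

    def __init__(self, switch_id: str):
        self.switch_id = switch_id
        self.pending: list[Commit] = []
        self.next_txn = 1

    def commit_locally(self, payload: dict) -> Commit:
        """Record a commit locally; it is non-authoritative until pulled."""
        c = Commit(self.switch_id, self.next_txn, payload)
        self.next_txn += 1
        self.pending.append(c)
        return c

    def pull(self) -> list[Commit]:
        """Called by the healing side: drain and return the cached commits."""
        drained, self.pending = self.pending, []
        return drained

# Usage: a switch keeps committing during a partition, then the cache is drained.
cache = LocalCommitCache("switch-7")
cache.commit_locally({"card": "1234", "minutes_used": 3})
cache.commit_locally({"card": "5678", "minutes_used": 1})
healed = cache.pull()
```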
Although each switch may have access to a local copy of all the tables,
the data in those tables is just a sub-set, or window, and every record is
stamped with a unique server/database/transaction identifier.
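The stamping scheme can be illustrated as follows; the field names are assumptions, but the point is that the server/database/transaction triple stays globally unique even though each site assigns its transaction numbers independently.

```python
from typing import NamedTuple

class RecordStamp(NamedTuple):
    server_id: int       # which server wrote the record
    database_id: int     # which database on that server
    txn_id: int          # transaction number local to that database

def stamp_key(stamp: RecordStamp) -> str:
    """A printable unique key for the record."""
    return f"{stamp.server_id}/{stamp.database_id}/{stamp.txn_id}"

# Two sites can reuse the same local transaction number without colliding,
# because the server/database prefix keeps the stamps distinct.
a = RecordStamp(server_id=1, database_id=1, txn_id=42)
b = RecordStamp(server_id=2, database_id=1, txn_id=42)
assert stamp_key(a) != stamp_key(b)
```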
It is funny, but the window concept was originally designed to deal with
the maximum table size limit in InterBase, and it evolved to serve a
totally different purpose over time.
We have daemons/bots that deal with different elements, and it all still
runs on Firebird 1.5, and will for at least another year, until that
system is retired and the new Firebird 2.5 systems come to the forefront.
> Commits are not allowed in separate segments.

This is where our designs differ - all segments can temporarily become the
primary and then resume a secondary role.
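The role cycle described above can be sketched as a small state machine; this is an illustration under assumed names, not Firebird code: every segment may promote itself to primary while the network is fractured, then hand back its locally committed work and drop to secondary once connectivity returns.

```python
class Segment:
    """A network segment that can act as primary during a fracture."""

    def __init__(self, name: str):
        self.name = name
        self.role = "secondary"
        self.local_commits: list[dict] = []

    def on_partition(self):
        """Network fractured: promote this segment so commits can continue."""
        self.role = "primary"

    def commit(self, row: dict):
        if self.role != "primary":
            raise RuntimeError("only a primary segment may commit")
        self.local_commits.append(row)

    def on_heal(self) -> list[dict]:
        """Network healed: hand back local work and resume a secondary role."""
        drained, self.local_commits = self.local_commits, []
        self.role = "secondary"
        return drained

# Usage: a segment rides out a fracture as primary, then steps back down.
seg = Segment("east")
seg.on_partition()
seg.commit({"call_id": 1})
merged = seg.on_heal()
```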
best regards
Dalton