Subject: Re: [Firebird-Architect] One way to scale Firebird put it on memory virtual disk or ramfs
Author: Ann Harrison
Post date: 2013-01-22T20:24:18Z
On Tue, Jan 22, 2013 at 11:48 AM, Thomas Steinmaurer <ts@...> wrote:
> Hope you don't mind two questions on NuoDB:
>
> * Is NuoDB fully solving the CAP theorem for distributed environments?
> Or e.g. is it eventually consistent when reading data?

"Fully solving"? The design of NuoDB allows it to continue to be available
and consistent even in the event of a partition, though the implementation
in the first release is less robust than we had hoped. The full
implementation involves "coteries" of storage managers.
For those not familiar with NuoDB, it has two types of processing systems:
storage managers whose sole job is to read and write data, and transaction
engines that interact with clients. Changes made by transaction engines
are persisted by storage managers. When a transaction engine needs data
that is not available in memory - its own and that of the other transaction
engines - it asks a storage manager to read the data.
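The read path described above can be sketched roughly as follows. This is an illustrative toy model, not NuoDB's actual API; the class and method names (`TransactionEngine`, `StorageManager`, `get`, `read`) are my own inventions.

```python
# Sketch of the described read path: a transaction engine serves a read
# from its own memory, then from a peer engine's memory, and only then
# asks a storage manager to read from disk. Names are illustrative.

class StorageManager:
    def __init__(self, disk):
        self.disk = disk          # persistent copy of the data

    def read(self, key):
        return self.disk[key]

class TransactionEngine:
    def __init__(self, peers, storage_manager):
        self.cache = {}
        self.peers = peers        # other transaction engines
        self.sm = storage_manager

    def get(self, key):
        if key in self.cache:                 # 1. its own memory
            return self.cache[key]
        for peer in self.peers:               # 2. another engine's memory
            if key in peer.cache:
                self.cache[key] = peer.cache[key]
                return self.cache[key]
        value = self.sm.read(key)             # 3. fall back to a storage manager
        self.cache[key] = value
        return value

sm = StorageManager({"x": 1})
te1 = TransactionEngine([], sm)
te2 = TransactionEngine([te1], sm)
te1.get("x")          # te1 loads "x" from the storage manager
print(te2.get("x"))   # te2 finds "x" in te1's memory instead
```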
Transaction engines are not concerned with partitions, except in the case
when they are separated from all active storage managers. In that case,
update transactions stall or fail at commit. The clients of an isolated
transaction engine can switch to connect to another transaction engine that
has access to active storage managers.
When the set of storage managers is partitioned, only one subset continues.
The others become inactive until they are reconnected. The design is that
the DBA defines some number of non-disjoint sets of storage managers called
"coteries." After a partition, only complete coteries remain active. Since
all coteries overlap, there's no way to have two complete coteries after a
partition. However, the design of coteries is tricky and really requires
an intelligent graphic design tool, which is not yet built. Moreover,
coteries are interesting when a database has more than five or six storage
managers, which is not the target of version 1. The backup plan is that the
partition with a majority of the storage managers survives. In the case of
an even number of storage managers, one is declared primary and it is the
tie-breaker. I'm not sure that was implemented either. However, the DBA
can specify a minimum number of storage managers that must be available for
a commit to succeed. Setting that number to a majority makes the database
partition tolerant.
"But," you say, "the database is less available when some number of machines
can no longer serve the database." That comes back to my initial point about
"fully solving." The database remains available and consistent in the
event of a partition. Consistency is an absolute - either consistent or
not. Availability can be a continuum or an absolute. NuoDB meets the goal
of absolute availability, but degrades on the continuum.
> * How does the asynchronous replication approach (if I got that right?)
> in NuoDB with the nature of synchronous JDBC calls, e.g. executeUpdate
> blocks until it's finished etc.

Asynchronous replication is the motto of the underpinnings of NuoDB - data
is shuffled to transaction engines as needed and changes are applied as
they arrive. However, the client layer looks just like any other MVCC
database. Reads are consistent. Read/write conflicts do not exist.
Write/write conflicts are detected and rejected. Calls from a JDBC client
to the transaction engine behave like all JDBC calls and are free to block
whenever they want.
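The conflict behavior described above — consistent reads, no read/write conflicts, and write/write conflicts detected and rejected — is the usual MVCC first-writer-wins scheme. A minimal sketch, in no way NuoDB's actual engine:

```python
# Toy MVCC write/write-conflict check: a transaction commits only if the
# record versions it wrote against are still current; a second writer of
# the same record is rejected. Readers never block writers.

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0

class Transaction:
    def __init__(self, db):
        self.db = db
        self.writes = {}   # key -> (version seen at write time, new value)

    def update(self, key, value):
        self.writes[key] = (self.db[key].version, value)

    def commit(self):
        # Detect write/write conflicts: someone else committed first.
        for key, (seen, _) in self.writes.items():
            if self.db[key].version != seen:
                return False
        # No conflicts: install the new versions.
        for key, (_, value) in self.writes.items():
            rec = self.db[key]
            rec.value, rec.version = value, rec.version + 1
        return True

db = {"x": Record(1)}
t1, t2 = Transaction(db), Transaction(db)
t1.update("x", 2)
t2.update("x", 3)
print(t1.commit())    # True  - first writer wins
print(t2.commit())    # False - conflicting write is rejected
print(db["x"].value)  # 2
```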
Cheers,
Ann
[Non-text portions of this message have been removed]