Subject | Re: [Firebird-Architect] Superserver
---|---
Author | Erik LaBianca
Post date | 2006-10-29T19:30:19Z
Jim Starkey wrote:
> Although the gating issue is #1, the critical issue is #4 -- is it a
> good idea. I think the answer is no. Assuming the goal is to provide
> data management services to the cluster (as opposed to, say, writing a
> PhD thesis), then a single process DBMS using single serial write commit
> technology will "beat the pants" off a cluster DBMS that has to write a
> page to disk to transfer it from process A to process B. In all
> seriousness, the best database architecture for a cluster is a separate
> large memory SMP box with processors to keep the memory full and the
> disk(s) busy.

From a performance standpoint that's unequivocally true. However, in my
limited understanding, database clustering is typically done to provide
scalability, redundancy and uptime, with performance being a secondary
concern and a natural outgrowth of scalability. Usually it's done with
throwaway machines (1U rackmount servers) and expensive (redundant and
hot-swappable) shared storage. A superserver system will never be able
to continue serving requests through a hardware failure, whereas classic
operating on a cluster could do so.
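The quoted performance argument can be made concrete with a back-of-envelope model. All numbers below are illustrative assumptions, not measurements: handing a hot cache page to another worker inside one process is roughly a memory operation, while a shared-disk cluster has to write the page out and have the other node re-read it.

```python
# Back-of-envelope comparison of page-transfer cost (illustrative only).
MEM_COPY_S = 1e-6     # ~1 us: hand a cached page to another thread (assumed)
DISK_ROUND_S = 10e-3  # ~10 ms: write a page and have another node re-read it (assumed)

transfers_per_txn = 4  # hypothetical hot pages touched per transaction

single_node = transfers_per_txn * MEM_COPY_S
clustered = transfers_per_txn * DISK_ROUND_S

print(f"per-txn transfer cost: {single_node * 1e3:.3f} ms vs {clustered * 1e3:.1f} ms")
print(f"slowdown factor: {clustered / single_node:,.0f}x")
```

Even with generous assumptions, the disk round trip dominates by several orders of magnitude, which is the heart of the "beat the pants off" claim above.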
Personally, I'd feel a lot better about the reliability of my database
system if I could periodically perform a 'pull the plug' test on it and
verify that everything continued operating correctly.
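A crash test of that sort can be sketched in a few lines. SQLite stands in here for any transactional engine, and killing the process only approximates a real power loss (the OS still flushes its page cache), so treat this as an outline rather than a true plug-pull:

```python
import os
import sqlite3
import subprocess
import sys
import tempfile
import textwrap
import time

db = os.path.join(tempfile.mkdtemp(), "crash.db")

# Child process: commit rows one at a time until it is killed.
writer = textwrap.dedent(f"""
    import sqlite3
    con = sqlite3.connect({db!r})
    con.execute("CREATE TABLE t (n INTEGER)")
    con.commit()
    n = 0
    while True:
        n += 1
        con.execute("INSERT INTO t VALUES (?)", (n,))
        con.commit()
""")
proc = subprocess.Popen([sys.executable, "-c", writer])

time.sleep(2)   # let some commits land
proc.kill()     # the "pull the plug" moment: no cleanup, no graceful shutdown
proc.wait()

# Reopen the database and check that committed work survived intact.
con = sqlite3.connect(db)
status = con.execute("PRAGMA integrity_check").fetchone()[0]
rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(f"integrity: {status}; {rows} committed rows survived")
```

The same pattern applies to a classic-on-cluster setup: kill one node mid-load and verify that the surviving nodes keep serving and that no committed transaction is lost.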
I guess it's really a matter of project goals. To my knowledge most of
the large RDBMS products (Oracle and MSSQL at the minimum) support
clustered operation, so it's clearly something people are interested in
doing. On one hand, it's a long way from the goal of a small, very fast
and very lightweight database engine. On the other hand, Firebird is
(uniquely?) positioned to go where no open-source database has gone so far.
My personal opinion is that if clustering isn't in the near-term future,
deprecating Classic in favor of an in-process superserver, like Embedded
on Windows, would simplify life for most people.
--erik