Subject: Re: [Firebird-Architect] Superserver
Author: Erik LaBianca
Post date: 2006-10-29T04:15:11Z
Ann W. Harrison wrote:
> As for the larger issue of Firebird's market, it appears
> that MySQL is moving away from small OEM's in favor of
> enterprise customers. Firebird is already a better choice
> for software resellers technically and financially. That's
> a huge global market that well fill exceptionally well.
> The tiny-footprint databases Jim mentioned are either
> single user or scale badly to multiple users, so they're
> not so attractive to anyone who develops an application
> designed to scale from single users to networks. Our
> various architectures, though a maintenance challenge
> and a problem in implementing some features, give us
> flexibility that other systems don't offer.
>
Apologies for necromancing this old thread, but I thought I'd chime in
in support of classic. Currently my company produces a piece of
shrinkwrap software using embedded superserver for windows, and also
uses firebird as the backend database (classic on linux) for some
multi-user applications in the office. We'll soon extend our shrinkwrap
application to support a multi-user mode with an out-of-process firebird
server.
Firebird is great because it gives us the platform neutrality and
flexibility to do these things. That said, in my opinion, classic
server is currently something of a red-headed stepchild. I see very few
posts regarding it in the -devel list, and although everyone seems
to recognize it as the optimal system for use on multiprocessor unix
platforms, the packaging is a bit sketchy and seems to lag behind. It
doesn't seem to me as if many folks really understand the way it works,
either.
In my (mostly unqualified) opinion, one way to keep classic in the game
is to make it cluster-aware. If I understand the way classic works now,
'all' that is needed is the ability to connect to a shared lock manager
from multiple machines. To make it truly cluster-resilient, I suppose
converting the lock manager to a distributed one would be necessary as
well. To my knowledge no other open source database is able to do this,
and having the functionality implemented would probably revitalize
classic server a lot and give firebird a leg-up feature on the competition.
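To make that concrete, here's a toy sketch (in Python, names invented for illustration; this is not Firebird's actual lock manager code) of the kind of arbitration the lock table performs: any number of connections can hold a shared lock on the same resource, while an exclusive request conflicts with any existing holder. In classic today, each connection-process reaches a table like this through shared memory; the cluster idea is essentially making the same service reachable from other machines.

```python
import threading

class ToyLockManager:
    """Toy lock table: grants compatible locks, refuses conflicting ones.

    No queuing, no deadlock detection -- just the compatibility check,
    to illustrate what a shared (and eventually distributed) lock
    manager would have to arbitrate for classic's per-connection
    processes.
    """

    SHARED, EXCLUSIVE = "shared", "exclusive"

    def __init__(self):
        self._mu = threading.Lock()          # serializes table access
        self._holders = {}                   # resource -> (mode, {owner ids})

    def try_acquire(self, owner, resource, mode):
        """Grant the lock if compatible with current holders, else refuse."""
        with self._mu:
            held = self._holders.get(resource)
            if held is None:
                self._holders[resource] = (mode, {owner})
                return True
            held_mode, owners = held
            # Shared locks are compatible with each other; anything
            # involving an exclusive lock conflicts.
            if held_mode == self.SHARED and mode == self.SHARED:
                owners.add(owner)
                return True
            return False

    def release(self, owner, resource):
        """Drop one holder; free the resource once no holders remain."""
        with self._mu:
            mode, owners = self._holders[resource]
            owners.discard(owner)
            if not owners:
                del self._holders[resource]
```

In-process, the `threading.Lock` stands in for the shared-memory mutex; the cluster-aware version would put the same grant/refuse logic behind a network endpoint (and, for real resilience, replicate its state across nodes).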
The other reason I use classic is embedded mode on unix. Once you figure
out all the permissions and such, it is far faster at doing things like
database backups when you are able to eliminate the tcp/ip overhead
entirely.
What I'd really like to know is the following:
1. Am I oversimplifying the problem of making classic cluster-aware? If
not, it's conceivably something I could see devoting some resources to,
given an agreed-upon design.
2. Why are people so jazzed about superserver? As I understand it, it was
largely implemented to work around weak ipc on windows. Why push
superserver on unix, then, since classic embedded mode works great and
classic has some great potential scalability? Is the lack of a shared
page cache really that big of a performance hit in the days of 64-bit
machines with gobs of ram?
--erik