| Subject | Re: [IB-Architect] Synchronisation between classic architecture server processes |
|---|---|
| Author | Jan Mikkelsen |
| Post date | 2000-03-24T23:22:23Z |
Ann Harrison <harrison@...> wrote:
>At 11:23 AM 3/24/00 +1100, Jan Mikkelsen wrote:
>
>>The reason for these particular questions is that it struck me that a
>>classic server could be turned into a linkable library. This would remove
>>the need for single user applications to start a separate process, and
>>avoids disrupting other IB applications on the system.
>
>I guess I'm confused. The individual database access programs (called INET
>servers) in the classic model are 90% shared library, but they do require a
>process context. Perhaps I'm being dim. Each connection starts a process
>so it controls its own volatile data structures. I'm at a loss to see
>why that disrupts other IB applications on the system - except in the
>way that database users (and O/S processes) conflict with each other.

Sorry, I'm not being clear. By disruption I mean applications depending on
a different version of Interbase. And I think I might have misunderstood
how the classic server is implemented.

First, what I meant by disruption:

I wasn't referring to runtime disruption, but install-time disruption.
Consider the case of a user installing multiple related applications, each
with Interbase "embedded".

If Interbase is embedded in an application, there is a link between the
application version and the Interbase version. Application vendors test
their application against a particular version of Interbase, and tend to
get upset if that changes. With an open source Interbase, this becomes
even more important because versions can be modified in arbitrary ways.

At the moment, there is a common client library and a common server
directory. If an application has installed (say) parts of IB5.6, and
another application wants to install parts of IB6.0, the application using
5.6 might break, and the application vendor will be unwilling or unable to
support the user. Running two IB server versions concurrently is likely to
be problematic.

For single-user applications using Interbase, there is no need to have a
separate Interbase server installation, or even to attempt to share code
with other applications in any way. However, at least under 5.6, you are
forced to do so. I'd like to see Interbase totally embedded in an
application like this, potentially even to the point of being statically
linked with the application executable.

As an aside, I saw some discussion somewhere about the word "embedded". I
really like the term "embedded". Being embeddable was probably the single
most important feature of Interbase for us.

I hope that makes it clearer.
On my misunderstanding:

I assumed that connection to a classic server followed the same model as
connection to a superserver: the application process links with a shared
library which provides the API functions to communicate with a server
process on behalf of the application. The server process does all the real
work in a separate process context from the application.
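To make that concrete, here is roughly what I understand the model to look
like from the application's side, sketched with the standard Interbase C
API. The connect string and database path are invented for illustration,
and error handling is abbreviated:

```c
/* Minimal sketch of the superserver connection model: the application
 * links only against the Interbase client shared library, which forwards
 * each API call to the server process. */
#include <stdio.h>
#include <ibase.h>

int main(void)
{
    ISC_STATUS status[20];      /* status vector filled in by each API call */
    isc_db_handle db = 0L;      /* handles must be zero before attaching */

    /* The client library marshals this over a virtual circuit; the server
     * process does the real work in its own process context. */
    if (isc_attach_database(status, 0, "server:/databases/employee.gdb",
                            &db, 0, NULL)) {
        isc_print_status(status);
        return 1;
    }

    /* ... isc_start_transaction(), isc_dsql_* calls, and so on ... */

    isc_detach_database(status, &db);
    return 0;
}
```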
In another thread, I see Jim Starkey write:
>Interbase "classic" is designed to run in the user process
>context. It is a shared library. It synchronizes access to
>pages with a lock manager.

This sounds like exactly what I want, assuming that "user process context"
means "application process context". But surely under this model, there
would be no need to fire up server processes in response to TCP connections,
and I'd be curious about the distributed shared memory implementation being
used for locking. It really starts to look like the page-shipping
architectures popular in some OODBMSs.
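To illustrate the kind of mechanism I have in mind with that question: in
the sketch below, independent processes serialise access to a shared page
through a lock that itself lives in process-shared memory. This is only my
illustration of the general technique, using POSIX shared memory and
semaphores; the names are invented, and it is not a claim about how
Interbase's lock manager actually works.

```c
/* Illustration only -- not Interbase code.  Independent processes
 * coordinate access to a shared "page" through a lock kept in shared
 * memory.  PAGE_SHM and page_lock_t are invented names; a real lock
 * manager would also handle the create/initialise race and lock modes
 * (shared vs. exclusive), which are omitted here. */
#include <fcntl.h>
#include <semaphore.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE_SHM "/sketch_page_lock"

typedef struct {
    sem_t lock;              /* process-shared mutex guarding the page */
    char  page[4096];        /* the shared page itself */
} page_lock_t;

/* Map the shared segment, creating and initialising it on first use. */
page_lock_t *attach_page(void)
{
    int created = 1;
    int fd = shm_open(PAGE_SHM, O_CREAT | O_EXCL | O_RDWR, 0600);
    if (fd < 0) {            /* segment already exists: just attach */
        created = 0;
        fd = shm_open(PAGE_SHM, O_RDWR, 0600);
        if (fd < 0)
            return NULL;
    }
    if (created && ftruncate(fd, sizeof(page_lock_t)) < 0)
        return NULL;
    page_lock_t *p = mmap(NULL, sizeof(page_lock_t),
                          PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (p == MAP_FAILED)
        return NULL;
    if (created)
        sem_init(&p->lock, 1, 1);   /* 1 = shared between processes */
    return p;
}

/* Any process may call this; the semaphore serialises page access. */
void update_page(page_lock_t *p, const char *data, size_t n)
{
    if (n > sizeof p->page)
        n = sizeof p->page;
    sem_wait(&p->lock);      /* acquire the page lock ... */
    memcpy(p->page, data, n);
    sem_post(&p->lock);      /* ... and release it for other processes */
}
```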
However, you said that the classic server is "90% shared library, but they
do require a process context". It sounds like a classic server is some code
that talks a protocol over a virtual circuit and translates those requests
into calls into a shared library which does the real work. So, there should
be nothing stopping an application from linking directly with that shared
library. If so, that's great!
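If that reading is right, the directly linked case would presumably look
the same at the source level, just without the remote machinery; something
like the sketch below, where a plain local path being handled in-process is
my assumption about how an embedded classic engine would behave:

```c
/* Same API as the remote sketch above, but with the engine library linked
 * directly into the application: no connect string, no server process.
 * That a local path would be handled entirely in-process is my assumption
 * here, not something I have verified against the classic code. */
#include <stdio.h>
#include <ibase.h>

int main(void)
{
    ISC_STATUS status[20];
    isc_db_handle db = 0L;

    /* No "host:" prefix -- the call should never leave this process. */
    if (isc_attach_database(status, 0, "/databases/employee.gdb",
                            &db, 0, NULL)) {
        isc_print_status(status);
        return 1;
    }
    isc_detach_database(status, &db);
    return 0;
}
```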
Or am I still missing something fundamental?
Regards,
Jan.