Subject | Re: [IB-Architect] Classic vs. superserver (long, very very long)
---|---
Author | John Bellardo
Post date | 2002-10-16T07:03:17Z
Jim,
I'm not sure if you are referring to an email from Mark or myself, but
I'll answer.
On Sunday, October 13, 2002, at 01:44 PM, Jim Starkey wrote:
> [...]
> Mark expressed a preference for a multi-threaded/multi-process
> architecture. I've deleted my first four responses. This
> time I'll just ask a few questions.
>
> Mark, how does a client make connection with the server?
In the general case, through a "load balancer" (for lack of a better
term). This process would bind to 3050 and route incoming connections
according to some policy set (all connections to database X go to
server Y, all connections from client Z go to A, ...). As best I can
determine, all our major platforms support passing sockets between
processes, so this "load balancer" adds a roughly constant overhead,
close to a single context switch.
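To illustrate what I mean by sending sockets between processes, here
is a minimal sketch using the standard Unix SCM_RIGHTS ancillary-data
mechanism; send_fd() and worker_fd are illustrative names, not code
from any existing proposal:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Hand an accepted client socket to a worker over a Unix domain
       socket, e.g. one end of a socketpair() shared with the worker.
       Error handling omitted for brevity. */
    int send_fd(int worker_fd, int client_fd)
    {
        char byte = 0;                /* must send at least one byte */
        struct iovec iov;
        struct msghdr msg;
        struct cmsghdr *cmsg;
        char buf[CMSG_SPACE(sizeof(int))];

        iov.iov_base = &byte;
        iov.iov_len = 1;
        memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = buf;
        msg.msg_controllen = sizeof(buf);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;  /* payload is a descriptor */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &client_fd, sizeof(int));

        return sendmsg(worker_fd, &msg, 0);
    }

The worker recvmsg()s its own copy of the descriptor for the same
connection, so the hand-off is invisible to the client.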
> Only
> one process can bind to 3050 (why did I pick that number?).
Actually, via fork() multiple processes can listen on a single
socket. It is just non-deterministic (from the processes' point of
view, at least) which one will actually "hear" a given connection
request.
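For the curious, a minimal sketch of that behavior (error checking
omitted; the process count is arbitrary):

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Several processes blocked in accept() on one listening socket
       inherited across fork(); the kernel picks which process
       "hears" each incoming connection. */
    int main(void)
    {
        struct sockaddr_in addr;
        int i, client;
        int listener = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(3050);
        bind(listener, (struct sockaddr *) &addr, sizeof(addr));
        listen(listener, 5);

        for (i = 0; i < 3; i++)       /* parent plus three children */
            if (fork() == 0)
                break;

        for (;;) {
            client = accept(listener, 0, 0);  /* any process may win */
            /* ... serve the connection ... */
            close(client);
        }
    }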
> In
> classic, inetd or xinetd waits on 3050 then forks off a single
> client server.
The "load balancer" could be configured to do this given a particular
site policy.
> In classic
I assume you mean SS, and not classic.
> and other multi-client servers the
> server itself binds and waits on 3050. Neither of these would
> work with the hybrid system. Do you have a design for a
> server manager to redirect clients to a designated process?
> How much extension to the remote protocol will you have to do?
There should be no extension to the remote protocol required.
> How do you make a close-connection-and-retry-another scheme
> backwards compatible?
It is not, and that approach is undesirable because it adds significant
overhead.
> What is the additional overhead of another
> complete round trip on short transactions?
That depends on the design, but what I proposed above costs close to
a single context switch.
>
> The engine currently uses the page locks issued by the cache
> manager to synchronize access to pages within a process knowing
> that a thread switch can't happen while a page is in use. When
> you relax this to permit parallel thread execution, you're going
> to need a different type of synchronization for threads within
> the process. Turning classic's external page locks into
> internal buffer locks works just fine in a multi-threaded single
> process server. But how do you handle both types of locks without
> rewriting vast quantities of extremely tricky code, introducing
> deadlocks between the two locking systems, and making the
> conditionalization much, much worse?
I'm not 100% sure. But I am sure that no matter what direction we
choose there will be some very tricky parts of the code that need to be
modified. Here are two different approaches I can think of off the top
of my head:
1. Two-level locks - Each process is responsible for allocating its
own locks internally to threads, but it must maintain an externally
visible (to other processes) lock with sufficient rights to cover all
of its intra-process locks (see the sketch below).
2. Single-level locks - Every thread lock is visible and maintained
in the shared lock table. This probably requires more of a
restructuring of the lock table than (1).
(1) has the added benefit of an inherent "exclusive access" mode when
there is only one process accessing a database. In that case all
locks can be issued/resolved without IPC.
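To make (1) concrete, here is a rough sketch of the idea. The
structure, the lock modes, and external_lock_upgrade() are all
hypothetical, not engine code:

    #include <pthread.h>

    enum { LCK_none, LCK_read, LCK_write };  /* illustrative modes */

    /* One of these per locked page, private to a process. The
       process holds a single entry in the shared (inter-process)
       lock table at a mode covering the union of what its threads
       need; thread-level sharing is arbitrated by a cheap in-process
       mutex. */
    typedef struct page_lock {
        int             external_mode;  /* mode in shared lock table */
        int             thread_readers; /* threads reading the page */
        pthread_mutex_t mutex;          /* guards the fields above */
    } page_lock;

    /* Hypothetical: one IPC round trip to the shared lock table. */
    int external_lock_upgrade(page_lock *pl, int mode);

    /* A thread read request touches the shared lock table only when
       the process does not already hold a covering external lock. */
    void thread_lock_read(page_lock *pl)
    {
        pthread_mutex_lock(&pl->mutex);
        if (pl->external_mode < LCK_read)
            pl->external_mode = external_lock_upgrade(pl, LCK_read);
        pl->thread_readers++;
        pthread_mutex_unlock(&pl->mutex);
    }

With only one process attached, external_lock_upgrade() degenerates
to local bookkeeping, which is exactly the "exclusive access" case
above.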
>
> Doesn't a hybrid multi-threaded/multi-process architecture
> combine the worst characteristics of classic and superserver
> while simultaneously losing the best?
That depends on what you consider the "best" and "worst"
characteristics. I would argue that it is less about absolute bests
and worsts and more about tradeoffs. Those tradeoffs are going to be
different for different customers, although I would expect to see
only a few tradeoff classes. Consider a workload that consists
primarily of inserts. There is no question that the reduced IPC cost
of a single process model (even a one-process-per-database model)
makes it the way to go.
On the other hand, each process only has a finite amount of address
space available to it (typically 1-2 GB). On systems configured with
large page caches this may create some hard tradeoffs (more clients
vs. less cache); for example, a 150,000-page cache of 8 KB pages
already consumes over 1 GB by itself. And with memory being so cheap,
those systems aren't as rare as they used to be. If the workload is
primarily reads, and the extra IPC overhead for the occasional writes
is acceptable, a multi-process configuration can handle more
simultaneous clients.
A multi-process model is also more robust in the face of crashes (UDF
or otherwise): one crashed process need not take down every
attachment.
> Superserver is (in theory)
> more efficient because it can use cheaper, in-process, non-AST
> generating locks and move pages between database attachments
> without a page write, context switch, page read cycle. This
> all gets lost in the multi-process case.
>
> Mark, what is the difference between multi-threaded classic
> and superserver?
I am assuming you are referring to single-process multi-threaded
classic? In that case there is no user-perceived difference:
multi-threaded classic would have the same advantages that SS has
today. It is mainly a distinction in thought process. The SS route
(and plan of attack) seems more focused on finding and fixing the
existing lock problems with SS (including moving from a single lock
to more fine-grained locking), while the CS approach suggests more
effort is placed in removing the shared data (for example, moving
most current globals into per-attachment context), and hence the need
for large numbers of locks in the first place. Of course, each
approach eventually needs to incorporate pieces of the other.
-John