Subject:   RE: [IB-Architect] Messaging API (theory)
Author:    Jim Starkey
Post date: 2000-05-17T21:16:20Z
While we wait for Jason to straighten out his race conditions,
let me take a few minutes and explain why asynchronous processing
is hard stuff.
Operating systems generally provide one of two delivery
systems: signals and threads. Signals are a context-free
asynchronous trap, enabled by a client program and delivered
by some system event. A signal is almost pure state. A
signal handler knows it has been called, but not why. Unix
systems generally support only two user-controllable signals,
SIGUSR1 and SIGUSR2, and often one or both is taken over by
some subsystem (for example, InterBase Classic). The BSD
variant of Unix signals (folded into POSIX) allows a number
of subsystems to multiplex on a single signal, provided each
goes out of its way to be a good world citizen.
Among the many problems with signals is what the signal handler
can do with transient information it uncovers (say, data associated
with an event). In general, it wants to stick it into some data
structure. But it's not that simple. Since the data structure
is shared between the signal handler and the normal state of the
program, there must be some synchronization. Most Unix systems
offer Sys-V semaphores for this sort of thing, but they don't work
very well for this application. The problem is simple: what does
the signal handler do if it finds the data structure locked?
It can't wait, because it has preempted the code that can release
the semaphore. Scratch semaphores. The other alternative is
for the program to disable that signal before accessing the
shared structure. This is a little clunky and interferes with
multiplexed signals, but it is the traditional Unix solution.
To make this more complex, depending on the Unix, certain system
and RTL calls cannot be made from signal state, just to keep
you on your toes.
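The traditional disable-the-signal discipline can be sketched in a few lines. This is a minimal Python illustration, not anything from InterBase: Python's signal module exposes the same POSIX masking primitives, and while Python serializes handler execution (so appending to a list is safe here where in C only a sig_atomic_t flag would be), the blocking pattern around the shared structure is the same one a C program would use with sigprocmask.

```python
import os
import signal
import time

# Shared state touched by both the signal handler and the main line.
pending = []

def on_usr1(signum, frame):
    # The handler sticks its transient information into the shared
    # structure; the main line must keep the signal blocked while
    # it touches that structure itself.
    pending.append("event")

signal.signal(signal.SIGUSR1, on_usr1)

# The traditional Unix fix: block the signal while the main line
# manipulates the shared structure, then unblock it.
old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)   # the "event" arrives while blocked
snapshot = list(pending)               # safe: the handler cannot run now
signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)

time.sleep(0.1)                        # let the released handler run
print(snapshot, pending)
```

The point of the snapshot is that the handler was held off for the whole time the main line had the structure open; the queued signal is delivered only after the mask is restored.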
The existing InterBase event mechanism finesses the problem with
a design where a single asynchronous event can occur from an
event_que call, and the API requires the user to supply a
result buffer of the correct size (original and updated event
blocks are the same size). This is one reason that wildcarding
is difficult.
A much better mechanism for asynchronous processing is threads. A
thread dedicated to listening for something to happen is much
simpler than a signal for two reasons. First, a thread can wait
for a data structure to become available. Second, operating system
designers smart enough to implement threads also design better
synchronization primitives than clunky Sys-V semaphores. Windows
supports a neat mechanism called critical sections (only one thread
can enter a critical section at a time), and pthreads supports
mutexes (mutual exclusion locks).
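The dedicated-listener pattern can be sketched with a pthreads-style mutex and condition variable. This is a generic illustration in Python (threading.Condition bundles the mutex and the condition), not the InterBase implementation; the names are invented for the example.

```python
import threading
from collections import deque

# A dedicated listener thread can simply wait for the shared
# structure to become available, something a signal handler cannot do.
events = deque()
cond = threading.Condition()   # a pthreads-style mutex + condition variable
received = []

def listener():
    with cond:                 # acquire the mutex
        while not events:      # wait() releases it atomically while asleep
            cond.wait()
        received.append(events.popleft())

t = threading.Thread(target=listener)
t.start()

with cond:
    events.append("message")   # publish under the lock
    cond.notify()              # wake the listener
t.join()
print(received)
```

The listener blocks inside cond.wait(), which atomically releases the mutex while it sleeps and reacquires it on wakeup; that is exactly the wait-for-the-structure behavior a signal handler can never have.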
But, alas, not all systems support threads, and not all threading
packages work.
Obviously, any mechanism proposed for implementation in InterBase
must work in all environments.
The other problem to be solved is how a thread or signal handler
wakes up the main program waiting for a notification. Cutler-based
operating systems (RSX, VMS, NT) use event flags that the
main line waits on and the handler sets or clears. They work,
but are tricky to get right. Historically, as soon as Cutler goes
off to do something else, his successors add a simpler system
such as HIBER (go to sleep) and WAKE (wake up). Since other things
can wake a process up prematurely, WAKE has to be assumed noisy.
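The consequence of a noisy WAKE is that the sleeper must re-check its real predicate in a loop rather than trust any single wakeup. A minimal sketch, using Python's threading.Event to stand in for the WAKE mechanism (the names and the work_ready flag are invented for the illustration):

```python
import threading
import time

# HIBER/WAKE in miniature: a wake can be noisy (caused by something
# else entirely), so the sleeper re-checks its own predicate in a loop.
wake = threading.Event()       # stands in for the WAKE mechanism
work_ready = False             # the predicate the sleeper actually cares about
result = []

def sleeper():
    while True:
        wake.wait()            # HIBER until someone, anyone, wakes us
        wake.clear()
        if work_ready:         # ignore premature wakes
            result.append("did work")
            return

t = threading.Thread(target=sleeper)
t.start()

wake.set()                     # a noisy, premature wake: no work yet
time.sleep(0.05)
work_ready = True              # now there is real work
wake.set()
t.join()
print(result)
```

The first wake is spurious; the loop absorbs it and goes back to sleep, and only the wake that coincides with real work lets the sleeper proceed.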
The Unix guys tried and tried and tried to make something that worked
before pthreads showed up, without much to show for the effort. The
Sys-V philosophy was to interrupt "long" operations (socket i/o,
for example) every time something happened (like any signal). This
led to endless bugs where an unexpected signal killed a pending
i/o and the programmer didn't check the status correctly. BSD tried
to correct this by automagically restarting the operation. Neither
worked particularly well (meaning lots of code all over to make anything
complex work). Again, the guys who designed pthreads built their
mutex mechanism to handle this situation gracefully.
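The Sys-V failure mode can be reproduced in miniature. One caveat for this sketch: modern Python (PEP 475) quietly restarts reads interrupted by a signal, BSD-style, so the handler here raises an exception to simulate the old interrupt-the-operation behavior; everything else (the pipe, the alarm) is invented for the illustration.

```python
import os
import signal

# Sys-V style: a signal arriving during a "long" operation aborts
# the pending i/o. The handler raises to defeat PEP 475's automatic
# BSD-style restart and expose the interruption.
class Interrupted(Exception):
    pass

def on_alarm(signum, frame):
    raise Interrupted

signal.signal(signal.SIGALRM, on_alarm)
r, w = os.pipe()

signal.alarm(1)                # an "unexpected" signal, one second out
status = None
try:
    os.read(r, 1)              # blocks forever: nobody writes the pipe
    status = "read completed"
except Interrupted:
    status = "pending i/o killed by a signal"
finally:
    signal.alarm(0)
print(status)
```

A programmer who forgot to handle this path is exactly the endless-bug case described above.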
So the problem of events isn't simply the design of a convenient
API, but designing a mechanism that can be implemented on all of
the current (and future) InterBase platforms.
The basic rule of safety is this: If you are planning to receive
something asynchronously, make sure you have someplace to put it.
The original event mechanism followed this rule. The messaging
system I proposed this morning does not. Implementing that messaging
system on a non-threading platform could be very, very tricky.
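The someplace-to-put-it rule can be sketched in miniature: allocate the landing zone before arming the asynchronous source, so delivery never has to allocate or wait. This Python sketch uses a pipe and a receiver thread as stand-ins for the event source; the fixed-size bytearray plays the role of the caller-supplied event block in the original mechanism.

```python
import os
import threading

# Preallocate the result buffer *before* the asynchronous source
# is armed, so delivery lands in storage that already exists.
buf = bytearray(64)            # fixed-size, caller-supplied buffer
nread = [0]                    # where the receiver records the length
r, w = os.pipe()

def receiver():
    # readv-style delivery: the receiver never allocates, it only
    # fills in the buffer the caller handed over in advance.
    nread[0] = os.readv(r, [buf])

t = threading.Thread(target=receiver)
t.start()
os.write(w, b"event data")     # the asynchronous arrival
t.join()
print(bytes(buf[:nread[0]]))
```

Because the destination exists before the event can fire, the delivery path needs no allocation and no lock, which is what makes the pattern portable to platforms without threads.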
On uniprocessors one can design non-interlocked data structures
and access algorithms. On multi-processors all bets are off.
I guess the bottom line is something like this. If God had meant
computers to handle things asynchronously, he would have made
operating system designers smarter. Or something like that.
Jim Starkey