| Subject | RE: [IB-Architect] Re: Events Improvement |
|---|---|
| Author | Dmitry Yemanov |
| Post date | 2001-05-07T17:18:24Z |
Jim,
> If the wildcard string is entered into the event table as an
> explicit event with its own event count, it looks good to me
> with a caveat or two on resource usage.
>
> In the original architecture (and, I believe, the current
> implementation) the event table is in System V shared memory.
> On most Unix systems, System V shared memory is implemented
> in non-paged (i.e. real) memory. To minimize the amount of
> memory required, the event manager was designed to track
> only events for which an attached process has declared an
> interest. Tracking only "interesting" events is no big deal.
> However, with your wildcarding mechanism, any event that matches
> an active wildcard must be tracked. Depending on the
> application, this might increase the number of events tracked
> from a typical couple of dozen to hundreds of thousands,
> millions, or more. If an application needs the functionality,
> this may well be worth the cost. But the ramification on
> configuration and limits of shared memory size should be
> carefully explored.
>
> On the other hand, the super-server has no need for a shared
> memory based event manager. Ditching the shared memory mechanism
> would also eliminate the awkward region-relative addresses
> needed to keep the event manager data structures position
> independent. A faster, cheaper, more flexible event manager
> should be a dividend of the super-server architecture.
>
> I rather like the wildcard scheme that preserves a valid
> event count but also provides the detailed events that triggered
> the wildcard event.
In my previous post I wanted to give you the semantics of my proposal. Now I
will try to share some architectural details.
When you register an interest in a particular event with isc_que_events, the
server allocates one REQ (request) node in shared memory. If there was no
prior interest in this event, the server also allocates an EVNT (event) node,
again in shared memory. It holds several fields; the ones that matter here
are the event name and the event count. Then, as far as I understand, the
server links the REQ block to the EVNT one through an additional RINT
(interest) block, which is allocated in the same shared region if necessary.
This is the original design and it is still untouched.
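A minimal sketch of how these shared-memory blocks might be laid out (field
names here are my guesses for illustration, not the actual engine
declarations):

```c
/* Illustrative only: simplified versions of the shared-memory blocks.
 * Because the region may be mapped at different addresses in different
 * processes, links are stored as offsets from the region base
 * (region-relative addresses) rather than raw pointers. */
typedef unsigned long SRQ_PTR;  /* offset into the shared region */

typedef struct evnt {           /* one per distinct event name */
    SRQ_PTR evnt_interests;     /* head of the RINT list */
    long    evnt_count;         /* cumulative count of postings */
    short   evnt_length;        /* length of the event name */
    char    evnt_name[1];       /* event name, variable length */
} EVNT;

typedef struct req {            /* one per isc_que_events call */
    SRQ_PTR req_interests;      /* RINT blocks owned by this request */
    SRQ_PTR req_ast;            /* information for the client callback */
} REQ;

typedef struct rint {           /* ties a REQ to an EVNT */
    SRQ_PTR rint_event;         /* back-link to the EVNT block */
    SRQ_PTR rint_request;       /* back-link to the REQ block */
    SRQ_PTR rint_next;          /* next interest for the same event */
    long    rint_count;         /* event count at registration time */
} RINT;
```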
In the original implementation, when an event is posted on transaction
commit by the deferred work manager, its name and count are passed to the
event manager. The manager compares the name against the list of EVNT blocks
to find the matching one, increments its count by the newly received count
value, finds the appropriate request and wakes up the delivering thread to
transport it. The delivery code constructs an EPB (event parameter buffer)
from the EVNT blocks linked to the request and sends it to the client side.
Note that the request block is freed after delivery.
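To make the flow concrete, here is a rough sketch of that posting path,
using the hypothetical structures above (first_event, next_event and
wake_interested_requests are made-up helpers, not the real engine API):

```c
#include <string.h>

/* Hypothetical helpers standing in for the real list traversal and
 * thread-wakeup code. */
EVNT *first_event(void);
EVNT *next_event(EVNT *event);
void  wake_interested_requests(EVNT *event);

/* Illustrative sketch of the original posting path: match the posted
 * name against the EVNT list, bump the cumulative count, and wake
 * the delivery thread for every interested request. */
static void post_event(const char *name, short length, long count)
{
    EVNT *event;

    for (event = first_event(); event; event = next_event(event)) {
        /* The original implementation matches names exactly;
         * no wildcard logic exists at this stage. */
        if (event->evnt_length == length &&
            memcmp(event->evnt_name, name, length) == 0)
        {
            event->evnt_count += count;         /* cumulative count */
            wake_interested_requests(event);    /* signal delivery */
            break;
        }
    }
}
```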
In my design I wanted to preserve the existing scheme and format of the EPB.
So I have introduced a new APE (actually posted event) structure, which also
contains an event name and count; the list of APE blocks belongs to the
interest block. This list is built by the event manager while matching
posted events against the registered masks, but the APE blocks are allocated
on the heap (I believe gds__alloc uses the standard C RTL heap management),
not in shared memory. Maybe it is not the best solution, but it is how I was
thinking at the time. One APE block is created per interest and lives for
the relatively short period until the request is delivered and its block is
freed, which causes all linked APE blocks to be freed as well. So, at the
delivery stage, we have the following information (see the sketch after this
list), e.g.:
~ standard part of EPB (the same as now) ~
- event 'A*' has been triggered and its count = 15 since the beginning
- event 'B*' ...
~ detailed part of EPB ~
- event 'A1', which matches the 'A*' mask, has been triggered 2 times during
the last transaction
- event 'A2', which matches the 'A*' mask, has been triggered 4 times during
the last transaction
- event 'B1' ...
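A sketch of the APE block and of how one might be linked to an interest
during mask matching; the rint_apes field and the exact gds__alloc usage are
my assumptions for illustration, not the committed code:

```c
#include <string.h>

void *gds__alloc(long size);    /* the engine's heap allocator */

/* Illustrative sketch.  Unlike REQ/EVNT/RINT, APE blocks live on the
 * process heap, so plain pointers are fine (safe for the super-server,
 * where everything shares one address space). */
typedef struct ape {
    struct ape *ape_next;       /* next APE on the same interest */
    long        ape_count;      /* postings in the last transaction */
    short       ape_length;     /* length of the concrete event name */
    char        ape_name[1];    /* concrete name, e.g. "A1", not "A*" */
} APE;

/* Assumed extension of the interest block: a list head for the APEs
 * that matched this interest's mask. */
typedef struct rint_ex {
    /* ...existing RINT fields go here... */
    APE *rint_apes;
} RINT_EX;

/* Called when a posted event matches a registered mask. */
static void link_ape(RINT_EX *interest, const char *name, short length,
                     long count)
{
    APE *ape = (APE *) gds__alloc((long) sizeof(APE) + length);

    memcpy(ape->ape_name, name, length);
    ape->ape_length = length;
    ape->ape_count  = count;
    ape->ape_next   = interest->rint_apes;
    interest->rint_apes = ape;
}
```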
Once all events of the current transaction are posted through the event
manager and their APE blocks are linked to the interests, the request can be
delivered. All the above information goes to the client and is passed to the
AST function.
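For illustration, the delivery stage might then look roughly like this,
reusing the hypothetical structures from the earlier sketches (put_event,
the interest iterators, free_apes and send_to_client are all invented
placeholders; the real EPB wire format is more involved):

```c
/* Hypothetical helpers for iterating a request's interests and for
 * writing one (name, count) entry into the buffer. */
RINT_EX *request_interests(REQ *request);
RINT_EX *interest_next(RINT_EX *interest);
EVNT    *interest_event(RINT_EX *interest);
char    *put_event(char *p, const char *end, const char *name,
                   short length, long count);
void     free_apes(RINT_EX *interest);
void     send_to_client(REQ *request, const char *epb, long length);

/* Illustrative sketch of the delivery stage. */
static void deliver_request(REQ *request, char *epb, const char *end)
{
    char *p = epb;
    RINT_EX *i;

    /* Standard part: one entry per registered mask with its
     * cumulative count -- the same format as today. */
    for (i = request_interests(request); i; i = interest_next(i)) {
        EVNT *event = interest_event(i);
        p = put_event(p, end, event->evnt_name, event->evnt_length,
                      event->evnt_count);
    }

    /* Detailed part: the concrete events that matched each mask,
     * with their counts from the last transaction. */
    for (i = request_interests(request); i; i = interest_next(i)) {
        APE *ape;
        for (ape = i->rint_apes; ape; ape = ape->ape_next)
            p = put_event(p, end, ape->ape_name, ape->ape_length,
                          ape->ape_count);
        free_apes(i);   /* the APEs die once the request is delivered */
    }

    /* Hand the buffer to the client; it ends up in the AST function. */
    send_to_client(request, epb, (long) (p - epb));
}
```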
> Nice piece of work, Dmitry.

Aren't you hurrying a bit? But thanks anyway ;-)
Cheers,
Dmitry