Subject: Re: [IB-Architect] Event datasets RFD
Author: Jim Starkey
Post date: 2000-05-15T23:13:50Z
At 03:11 PM 5/15/00 -0700, Bill Karwin wrote:
>I'd be happy about that. The current event API calls are pretty awkward and
>difficult to use dynamically. For instance, one can't register interest in
>a list of events when you don't know the length of the list at compile-time
>(because the API uses varargs). And since the event counts are returned in
>the status-vector, you can't listen for more than 20 events.
>
The event API supports a 32K event block. Each event in the
block requires the length of the event name, the name itself, and a
4-byte event count. You can cram a great deal more than 20 events
into a 32K buffer.
The problem is not the design of the feature but the documentation. In
V4 somebody decided not to document the interface itself but only the
helper functions to build the event block dynamically. The helper
functions (not the real API) are the source of the alleged limitations.
Listening for 1,000 events is fully supported.
Bill, could you present a few scenarios on how your message
interface would be used?
The design goals of the event mechanism were:
1. To allow listening for a large number of events.
2. To allow a very large event name space.
3. To make events very specific to enable server-side filtering.
4. To allow notification of many events with a single round trip.
5. To eliminate race conditions and dropped events.
6. To mesh with the InterBase transaction model.
Consider the canonical commodities trading application. Among
the many possible models are these:
Model A: The user waits on event "change", receiving notification
when the price of any commodity changes, along with the name of
the commodity, the old price, and the new price.
Model B: The user waits on the commodities he's interested in
(soybeans, pork bellies, etc.), receiving notification when
one or more of them changes.
Model A requires broadcasting events (and associated data) to
everyone interested in any commodity whatsoever. Load up those
unwanted and unloved notifications, and you have a vast amount
of network, server, and client resources totally wasted. Even
the guys that get a notification that they actually care about
have to go back into the database and start a transaction to
get the current value (the values in their queue might have been
waiting behind a hundred useless messages). So what is the
value of giving them the price when it has to be assumed stale
when they get it? Very, very little.
Model B requires that clients be specific about what events they
want. The application can define named classes of events, if
required. If a ticker-driver needs notification of very broad
events, the application designer can define events like "stock-changed".
But only the guys that are interested in "stock-changed" get those
events.
Model B is necessary to implement Model A. The question is
whether the additional cost of resource usage of Model A is
justified. I believe the following are true:
1. Event data will lead the lazy programmer into unwise
program design, specifically assuming the event data
is current or useful. It is more coding to start a
transaction and fetch the proper data than to use
the stale data in the event message. His program
gets wrong results only in production, where there is
contention and delay, and never in testing.
2. A system that delivers event notifications is vastly
cheaper in resource utilization than message delivery.
Event notification scales to extremely high event rates,
extremely large event spaces, and large numbers of clients.
>How about a new message API based on parameter buffers, like the services
>API? This has the advantage of being easy to manipulate in multiple
>languages.
>
Hey, Bill, it has always been this way. Complain to Borland
documentation.
>isc_stmt_handle
>isc_message_dataset(
> (ISC_STATUS *) status,
> (char *) message_name);
>
>This maps a message name to a virtual statement handle, which one can then
>use to fetch datasets, as I described in my previous email.
>
The existing API is based on event counts. Each event (in theory,
at least) has a count associated with it. When an event is posted,
the count is incremented. The event block lists the names of events
of interest and their initial counts. When any event of interest is
posted, a return event block with updated counts is returned. If no
counts have changed, nothing is returned.
A problem with your design is accounting for the lifetime of an
event message. When does the server consider it delivered? When
the server fetches the message from the event manager? When the
server sends the message to the client? When the
client receives the message? When it is queued on the client side?
When the client program does a successful fetch? What happens to
unread messages in the virtual statement? Does the client
send unread messages back to the server? Does it drop them on the
floor? What if two parts of the program have declared independent
interest in the events -- do both get the messages, or does one
get all of the messages and the other none (the server can't
tell that this is happening)?
The existing mechanism is designed to handle all of these problems.
The counts live on the server. They can be delivered, correctly,
any number of times to any number of clients, or any number of
independent functions within a single client. If two guys want
independent notification, fine, it works. If a function unwinds
because of an exception, nothing is lost -- the previous event
block is safe to reuse without loss of data. The worst case,
even in the face of an unexpected unwind, is a redundant notification,
which will be detected on database access. It is simple,
fast, cheap, and robust.
Jim Starkey