Subject | Re: [firebird-support] Performance of events |
---|---|
Author | Daniel Albuschat |
Post date | 2009-12-15T13:53:01Z |
2009/12/15 Alan McDonald <alan@...>
Yes, I don't know whether the newly implemented events have anything to do
with the load I have been experiencing recently. But since the software that
uses events was deployed on Friday, it seemed fairly likely.
Anyway, I'm absolutely in favor of using events to create as little
traffic as possible. HOWEVER, because Firebird's events are limited to a
name only (with no additional payload that I could pass), even this
creates much more traffic than necessary.
For example, I cannot listen for an event for newly created datasets. I can
receive events for *any* new dataset in a table, but I wouldn't know the
primary key of that dataset.
Currently I have two scenarios:
1) A list of datasets is fetched. For each fetched dataset, two events
are registered (only for the ones that are displayed to the user, so this is
usually around 50, and perhaps 1,000 to 5,000 or so in the worst case). One
event is triggered on updates, the other on deletes. Updates cause the
dataset to be re-fetched (by its ID), and deletes simply remove it from
the buffer.
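A minimal sketch of the per-row registration described above, in Python. The event-name patterns `task_upd_<id>` / `task_del_<id>` are my own assumption for illustration, not anything Firebird defines; the server-side triggers would have to POST_EVENT names in exactly this shape for it to work:

```python
# Client-side helper: build the event names to register for each displayed
# row, and route an incoming event name back to the right action.
# The "task_upd_" / "task_del_" prefixes are hypothetical conventions.

def subscription_names(displayed_ids):
    """Two events per displayed dataset: one for updates, one for deletes."""
    names = []
    for pk in displayed_ids:
        names.append(f"task_upd_{pk}")  # on update: re-fetch this row by its ID
        names.append(f"task_del_{pk}")  # on delete: drop this row from the buffer
    return names

def dispatch(event_name, refetch, remove):
    """Route a received event name to the matching client-side callback."""
    if event_name.startswith("task_upd_"):
        refetch(int(event_name[len("task_upd_"):]))
    elif event_name.startswith("task_del_"):
        remove(int(event_name[len("task_del_"):]))
```

With 50 displayed rows this means 100 registered event names per client, which is also roughly where the worst-case numbers above come from.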
What is missing is adding datasets to an existing list. This is especially
difficult because of the various where-clauses that might be applied to the
list, and, of course, because of the limitation in events I mentioned
earlier.
The other scenario is currently under development, but already
deployed on our local installation:
2) The software is meant to notify a user about newly assigned tasks in
real time. There is a stored procedure that returns all active tasks, and a
database event is triggered whenever this list changes. The software then
re-fetches the list (only the primary keys and update times, though) and
compares it with the currently displayed list. Missing tasks are removed,
new tasks are added, and tasks with a changed update time are marked for update.
Then the data of the new and marked tasks is fetched.
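The reconciliation step can be sketched as a comparison over (primary key, update time) pairs; the function and its return shape are illustrative, not the actual implementation:

```python
def reconcile(displayed, fetched):
    """Compare the displayed list against the freshly fetched one and decide
    which tasks to remove, add, or re-fetch.

    displayed, fetched: dicts mapping primary key -> update_time.
    Returns (removed_ids, added_ids, changed_ids).
    """
    removed = [pk for pk in displayed if pk not in fetched]
    added = [pk for pk in fetched if pk not in displayed]
    changed = [pk for pk, ts in fetched.items()
               if pk in displayed and displayed[pk] != ts]
    return removed, added, changed
```

Only the full data of the added and changed tasks is then fetched, so the heavy traffic stays proportional to the actual changes rather than the list size.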
I think this is about the most traffic- and load-efficient algorithm possible.
Daniel
--
eat(this); // delicious suicide
[Non-text portions of this message have been removed]
>Hi Alan,
>
> > Hello,
> >
> > I'm currently widely adopting the usage of events to realize a live
> > update on all clients when data has been changed from another client.
> > Does anybody have experience with posting and registering for a lot of
> > events and its impact?
> > The current usage of event is one posted event every few seconds, with
> > at least one to five clients listening for that event. That's not much
> > and I am planning to add much more "event-traffic", but I've noticed
> > that fbserver utilized the CPU to 100% today and yesterday and I'm not
> > sure, why. Are events something meant to be used rarely, or is
> > something like 10 posted events per second with 100 different clients
> > listening to potentially hundreds of events per connection feasible?
> > Could it potentially slow down the server because of difficulties
> > regarding transaction isolation?
> >
> > Any hint, suggestion or experience would be highly appreciated.
> >
> > Greetings,
> >
> > Daniel Albuschat
>
> your 100% CPU usage may have nothing to do with event traffic or responding
> with query refreshing. It may be a sweep - this is another whole story.
> If you use events, try to keep the responding query refreshes small.
> Row level invalidation is good if you can do it.
> You may go through a period of adding a lot of events, but when you're
> finished you may realise that most of the events are making the clients
> respond in the same way each time. So fewer events can do the same job.
> And yes, many events causing clients to do a lot of stuff when really they
> shouldn't be doing it means a lot of network traffic for no particular
> gain.
>