Subject: Re: [Firebird-Java] Possible bug in Firebird 1.5.2 Classic (cross-post)
Author: David Johnson
At this point, this is an exploratory proof-of-concept exercise.

The queue only hits the backing store when a message is posted and when
it is removed, or if the queue depth exceeds the memory cache depth
(configurable per queue). The backing store is there primarily for
recovery and for spike-loading situations.
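
To make that concrete, here is a rough Java/JDBC sketch of the
write-through idea. This is not the POC's actual code; the schema, the
names, and the simplified FIFO handling are illustrative only:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.LinkedList;

// Illustrative sketch only -- not the POC's code. Assumed schema:
//   CREATE GENERATOR MESSAGE_ID_GEN;
//   CREATE TABLE MESSAGE_QUEUE (ID INTEGER NOT NULL PRIMARY KEY,
//                               BODY VARCHAR(1024));
public class CachedQueue {

    private static final class Message {
        final long id;
        final String body;
        Message(long id, String body) { this.id = id; this.body = body; }
    }

    private final LinkedList<Message> cache = new LinkedList<Message>();
    private final int cacheDepth;  // memory cache depth, configurable per queue
    private final Connection db;   // backing store connection

    public CachedQueue(Connection db, int cacheDepth) {
        this.db = db;
        this.cacheDepth = cacheDepth;
    }

    // Posting always journals the message so it survives a crash; the
    // in-memory copy is kept only while the cache has room, so a load
    // spike simply leaves the overflow on disk.
    public synchronized void post(String body) throws SQLException {
        long id = nextId();
        PreparedStatement ps = db.prepareStatement(
                "INSERT INTO MESSAGE_QUEUE (ID, BODY) VALUES (?, ?)");
        try {
            ps.setLong(1, id);
            ps.setString(2, body);
            ps.executeUpdate();
        } finally {
            ps.close();
        }
        if (cache.size() < cacheDepth) {
            cache.addLast(new Message(id, body));
        }
    }

    // Removal serves from memory in the normal case and falls back to
    // the backing store after a restart (recovery) or a spike. Strict
    // FIFO across a spill-and-refill cycle is not handled in this sketch.
    public synchronized String remove() throws SQLException {
        Message m = cache.isEmpty() ? oldestFromStore() : cache.removeFirst();
        if (m == null) {
            return null; // queue is empty
        }
        PreparedStatement ps = db.prepareStatement(
                "DELETE FROM MESSAGE_QUEUE WHERE ID = ?");
        try {
            ps.setLong(1, m.id);
            ps.executeUpdate();
        } finally {
            ps.close();
        }
        return m.body;
    }

    private Message oldestFromStore() throws SQLException {
        Statement st = db.createStatement();
        try {
            ResultSet rs = st.executeQuery(
                    "SELECT FIRST 1 ID, BODY FROM MESSAGE_QUEUE ORDER BY ID");
            return rs.next() ? new Message(rs.getLong(1), rs.getString(2)) : null;
        } finally {
            st.close();
        }
    }

    private long nextId() throws SQLException {
        Statement st = db.createStatement();
        try {
            ResultSet rs = st.executeQuery(
                    "SELECT GEN_ID(MESSAGE_ID_GEN, 1) FROM RDB$DATABASE");
            rs.next();
            return rs.getLong(1);
        } finally {
            st.close();
        }
    }
}

The point is that disk writes are bounded at two per message (the
insert and the delete), while reads normally never leave memory.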

A "real" system would build upon the baseline architecture, but be
embedded as a service within a build of the Vulcan engine behind the Y-
valve (whenever it is ready for prime time and I have sufficient
interest and time to brush up on my C/C++). This is similar to the
manner in which IBM has tied MQ Series to the DB2 engine. Since Jim has
seen fit to emulate JDBC for the underlying engine classes in Vulcan,
the Java POC should be readily translatable when the Vulcan rebuild is
done.

Working behind the Y-valve in C/C++ (eventually) may allow better
integration with the engine memory cache.

Since, as people have noted, the queue depth should normally be very
low, the page count for the table should also be low, so its pages
would typically be cached in memory. I am still not intuitively
familiar with how intensive garbage collection operations are in
Firebird, so this will definitely be a learning experience.

The POC is not intended to be a production system. It should be robust
enough for production, but I am not banking on its performance.

Ann noted in the support thread that there is a limited number of table
handles available (a 15-bit integer). It is only in the test case that
I expect to create and destroy tables often enough to run out of table
handles. The test case can (and will) be adapted to create and destroy
the entire backing database, so I don't expect the table handle count
to be an issue.
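
For illustration, that create-and-destroy lifecycle would look roughly
like this using Jaybird's FBManager (assuming I remember the management
API correctly; the path and credentials are placeholders, and this is
not the actual test case):

import org.firebirdsql.management.FBManager;

// Rough sketch of the planned test-case lifecycle: build the whole
// backing database before a run and drop it afterwards, so table
// handles never accumulate across runs. Path and credentials are
// placeholders; error handling is omitted.
public class BackingStoreLifecycle {

    public static void main(String[] args) throws Exception {
        FBManager manager = new FBManager();
        manager.setServer("localhost");
        manager.setPort(3050);
        manager.start();
        try {
            manager.createDatabase("/tmp/queue_poc.fdb", "sysdba", "masterkey");

            // ... create the queue tables and run the test case here ...

            manager.dropDatabase("/tmp/queue_poc.fdb", "sysdba", "masterkey");
        } finally {
            manager.stop();
        }
    }
}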

I am working on packaging a minimal test case, which I will email
directly to interested parties as a zip archive of the Java source.

Thanks!
David Johnson

On Mon, 2005-08-01 at 08:52 -0700, David Jencks wrote:
>
> On Aug 1, 2005, at 8:31 AM, Roman Rokytskyy wrote:
>
> > > I realize I am butting my head in here where it may not be wanted,
> > > and it is possible that Firebird does not behave like other
> > > relational databases, but....
> > >
> > > The access patterns needed for most queueing systems are rather
> > > different from what most relational databases are good at. Usually
> > > queueing systems can keep more or less up with demand, and keep all
> > > the messages in memory. In this case a rolling-log-based persistence
> > > scheme works really well: in the "send" transaction you write the
> > > incoming message to the log (as well as keeping it in memory), and
> > > in the "receive" transaction you write a token indicating that the
> > > message has been delivered. Some other thread that cleans up the log
> > > correlates the two log entries and discards both. This thread only
> > > has to actually process messages that haven't been delivered,
> > > usually a very small fraction of the total.
> >
> > I agree with you here :). I would think that Firebird will perform
> > even worse than other (pessimistic) engines. If the pattern above is
> > implemented, the main outcome most likely would be permanent garbage
> > collection of the deleted messages, and what is left will be used
> > for message delivery.
> >
> > So, the only "advantage" of using Firebird I see is its XA-ness,
> > which allows one to create transactional queues without much
> > programming. Maybe it's worth it?
> >
>
> You can implement XA with this kind of rolling log by using four log
> records: message send prepare, message send commit, message deliver
> prepare, message deliver commit. (ActiveMQ supports XA, roughly this
> way.) Also, most messaging experts seem to try to avoid XA, instead
> preferring to use correlation IDs and constructing their systems to
> detect and compensate for message redelivery due to failures. I have
> my doubts about how appropriate this is, but then I don't know how to
> make messaging systems go fast :-)
>
> thanks
> david jencks
>
> > Roman
> >
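
P.S. For anyone who wants to see the rolling-log idea spelled out,
below is a minimal single-file sketch of the scheme David Jencks
describes above. The record format (one record per line, single-line
payloads) and the cleanup pass are my own illustrative assumptions, not
how ActiveMQ or the POC actually does it:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the rolling-log scheme quoted above. Assumes
// single-line payloads; a production log would use a binary format and
// force writes to disk.
public class RollingLog {

    private final String path;

    public RollingLog(String path) {
        this.path = path;
    }

    // "Send" transaction: append the incoming message to the log.
    public synchronized void logSend(long id, String payload) throws IOException {
        append("SEND " + id + " " + payload);
    }

    // "Receive" transaction: append a token saying the message was delivered.
    public synchronized void logDeliver(long id) throws IOException {
        append("DELIVER " + id);
    }

    // Cleanup pass: correlate SEND/DELIVER pairs and discard both, then
    // rewrite the log with only the (usually few) undelivered messages.
    public synchronized void compact() throws IOException {
        Map<String, String> pending = new LinkedHashMap<String, String>();
        BufferedReader in = new BufferedReader(new FileReader(path));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.startsWith("SEND ")) {
                    String rest = line.substring(5);
                    int sp = rest.indexOf(' ');
                    pending.put(rest.substring(0, sp), rest.substring(sp + 1));
                } else if (line.startsWith("DELIVER ")) {
                    pending.remove(line.substring(8));
                }
            }
        } finally {
            in.close();
        }
        PrintWriter out = new PrintWriter(new FileWriter(path, false));
        try {
            for (Map.Entry<String, String> e : pending.entrySet()) {
                out.println("SEND " + e.getKey() + " " + e.getValue());
            }
        } finally {
            out.close();
        }
    }

    private void append(String record) throws IOException {
        FileWriter out = new FileWriter(path, true); // append mode
        try {
            out.write(record + "\n");
            out.flush(); // a real log would also sync to disk here
        } finally {
            out.close();
        }
    }
}

The XA variant David mentions would just add the two prepare record
types alongside SEND and DELIVER.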