Subject: Re: [Firebird-Java] Possible bug in Firebird 1.5.2 Classic (cross-post)
Author David Johnson
On Mon, 2005-08-01 at 08:16 -0700, David Jencks wrote:
> I realize I am butting in here where it may not be wanted, and it
> is possible that Firebird does not behave like other relational
> databases, but...
>
> The access patterns needed for most queueing systems are rather
> different from what most relational databases are good at. Usually
> queueing systems can more or less keep up with demand and keep all
> the messages in memory. In that case a rolling-log-based
> persistence scheme works really well: in the "send" transaction you
> write the incoming message to the log (as well as keeping it in
> memory), and in the "receive" transaction you write a token
> indicating that the message has been delivered. A separate thread
> cleans up the log by correlating the two entries and discarding
> both; it only has to actually process messages that haven't been
> delivered, usually a very small fraction of the total.
>
> You might want to look at ActiveMQ. It works as I described,
> putting the leftover messages into long-term storage, usually a
> relational database. I'd be very interested to know how well it
> works using Firebird as the long-term storage.
>
> If your requirements don't fit JMS very well, I'd be quite
> interested to know how your system performs.
>
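
The rolling-log scheme you describe maps well onto what I'm building.
As I read it, the shape is roughly the following (a minimal Java
sketch of my understanding, not ActiveMQ's actual code; every class
and method name here is invented for illustration):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of a rolling-log persistence scheme for a queue.
    // Send: append the message to the log and keep it in memory.
    // Receive: append a "delivered" token for that message id.
    // A cleanup pass correlates the two entries and discards both,
    // so only undelivered messages ever need long-term storage.
    public class RollingLogSketch {
        private final Map<Long, byte[]> inMemory =
                new ConcurrentHashMap<Long, byte[]>();

        // the "send" transaction
        public void send(long id, byte[] message) {
            appendSendRecord(id, message); // durable record of the send
            inMemory.put(id, message);     // fast path stays in memory
        }

        // the "receive" transaction
        public void acknowledge(long id) {
            appendDeliveredToken(id);      // durable record of delivery
            inMemory.remove(id);
        }

        // cleanup thread: pair send/delivered entries and drop them;
        // anything unpaired is moved to long-term storage (e.g. a
        // relational database such as Firebird)
        public void compactLog() { /* correlation logic elided */ }

        private void appendSendRecord(long id, byte[] message) { /* ... */ }
        private void appendDeliveredToken(long id) { /* ... */ }
    }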

I may yet wrap this to make it JMS-compliant.
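
If I do, the wrapper can probably stay thin; something shaped like
this, with QueueEngine standing in for my actual queue class (a
hedged sketch only, not a full javax.jms implementation):

    import javax.jms.JMSException;

    // Sketch of a thin JMS-facing adapter over the native queue.
    // QueueEngine is a hypothetical stand-in for the real queue
    // class; a compliant wrapper would implement the full
    // javax.jms.MessageProducer / MessageConsumer interfaces.
    public class JmsStyleWrapper {

        // placeholder for the native queue implementation
        public interface QueueEngine {
            void enqueue(String queue, byte[] payload);
            byte[] dequeue(String queue);
        }

        private final QueueEngine engine;

        public JmsStyleWrapper(QueueEngine engine) {
            this.engine = engine;
        }

        // shaped like MessageProducer.send(); the real thing would
        // take a javax.jms.Message rather than raw bytes
        public void send(String queue, byte[] body) throws JMSException {
            engine.enqueue(queue, body);
        }

        // shaped like MessageConsumer.receive()
        public byte[] receive(String queue) throws JMSException {
            return engine.dequeue(queue);
        }
    }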

First performance test: I pushed 1 million messages through in 1,108
seconds, which works out to an average throughput of roughly 900
messages per second (1,000,000 / 1,108 = ~902).

This is a little sluggish, but it is adequate performance for most
production systems.
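
For context, the test loop had roughly this shape (a sketch only;
NativeQueue is a placeholder for my actual class, and the real test
case does more bookkeeping than this):

    // Rough shape of the throughput test: push a million messages
    // through send/receive and derive messages per second from the
    // wall-clock time.
    public class ThroughputTest {

        // placeholder for the queue implementation under test
        public interface NativeQueue {
            void put(byte[] message);
            byte[] take();
        }

        public static void runTest(NativeQueue queue) {
            final int count = 1000000;
            final byte[] payload = new byte[256]; // arbitrary size

            long start = System.currentTimeMillis();
            for (int i = 0; i < count; i++) {
                queue.put(payload);  // "send" transaction
                queue.take();        // "receive" transaction
            }
            long elapsedMs = System.currentTimeMillis() - start;

            // 1,000,000 messages in 1,108 s works out to ~902/s
            System.out.println((count * 1000L / elapsedMs) + " msgs/sec");
        }
    }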

If this were revamped in C++ and tied directly to the Firebird
buffering layer, I would expect a significant improvement from
eliminating the duplicated work.


Test system specifics:

Hardware: 800 MHz Pentium II desktop, IDE hard drive
OS: Linux - Fedora Core 1.5 with all patches current
JVM: Java 1.5.0 build 3
DBMS backend: Firebird 1.5.2 Classic

Firebird used a steady 60% +/- 2% of the CPU throughout the test run.
The JVM, which hosted both the queues and the test cases, used about
35% of the CPU over the same period.



Testing inconsistencies not accounted for:

Processor usage was shared with the test-case software, the Eclipse
IDE, and X. Once the wire protocol is solid, I will re-run the test
case over the wire; I expect that the CPU consumed by the test cases
themselves roughly offsets the CPU that was not spent translating the
wire protocol.

Server-class hardware may improve throughput significantly.