Subject RE: [firebird-support] Slow performance with Index
Author Edwin A. Epstein, III
Hmmm. Very interesting idea. The DNC_QUE field itself would no longer be
indexed at all. Then, following your idea, a trigger would fire off a UDF
that adds the record to a separate queue management system, which would
take care of exporting, importing, and processing the result files. It
would then run updates against the database, setting the DNC_QUE field
back to '0' and updating some other fields at the same time.

I don't think it would have to be a purchased product, either. It could
even be something like MySQL and some PHP that gets triggered every 10
minutes to do the exporting, importing, and processing. All I really need
is a UDF that can perform an insert into a MySQL database.
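
A minimal sketch of what that UDF might look like, assuming the MySQL C
API (libmysqlclient); the table name dnc_queue and the connection details
are made up, and a production version would need escaping and real error
handling:

/* queue_insert.c -- sketch of a Firebird UDF that pushes a record key
 * into a MySQL queue table.  Assumes libmysqlclient; the table name
 * (dnc_queue) and connection parameters are placeholders.
 * Rough build: gcc -shared -fPIC queue_insert.c -lmysqlclient -o queue_udf.so
 */
#include <stdio.h>
#include <mysql/mysql.h>

/* Firebird passes CSTRING arguments as null-terminated char pointers */
int queue_insert(const char *record_key)
{
    MYSQL *conn = mysql_init(NULL);
    char sql[256];
    int rc = 0;

    if (conn == NULL)
        return -1;

    /* host/user/password/database are placeholders */
    if (mysql_real_connect(conn, "localhost", "queue_user", "secret",
                           "queue_db", 0, NULL, 0) == NULL) {
        mysql_close(conn);
        return -1;
    }

    /* a real UDF should escape record_key (mysql_real_escape_string) */
    snprintf(sql, sizeof(sql),
             "INSERT INTO dnc_queue (record_key) VALUES ('%s')", record_key);
    if (mysql_query(conn, sql) != 0)
        rc = -1;

    mysql_close(conn);
    return rc;
}

Connecting per call from a trigger would be slow at 250K+ rows, so a real
version would keep the connection open across calls, but the shape is the
same.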

Thanks for the idea! :)

-----Original Message-----
From: David Johnson [mailto:d_johnson@...]
Sent: Thursday, February 03, 2005 6:40 PM
To: firebird-support@yahoogroups.com
Subject: RE: [firebird-support] Slow performance with Index



On Thu, 2005-02-03 at 20:03, Edwin A. Epstein, III wrote:
>
>
> It is necessary to process these in batch mode. I have several SP's that
> will set the process flag based upon multiple different criteria in the
> database.


> These SP's run in a single transaction and can flag as many as 250K to
> 300K records on their own. So semi-continuous scanning of the flag field
> is sort of already being done. It's just that the number of flags set at
> any one time is already fairly huge.

This gives some good parameters.

> We take the batch of records and send them out via FTP, then check the
> FTP site every 5-10 minutes for a new result file. This file is then
> manually processed into the database (soon to be automatic). The
> processing happens fairly fast, about 700+ updates per second, so it
> usually takes under an hour for the results to be updated into the
> database.

>
> I have had problems with a 2nd table. It gets "thrashed" so often that
> its performance begins to degrade rather quickly.

Have you considered using a queueing product such as MQ series from IBM
(or one of its competitors)?

> So many inserts, deletes, triggers, etc. It causes a lot of garbage
> collection to run and has impacted backup performance. It was, however,
> the fastest at being exported.

Hmmm ... This sounds like a job for MQ (or one of its competitors). I
believe that you may need to refactor this process to a "send and
forget" model.

Simplistically, on update in the DBMS each row in this table submits
itself to a queue via a trigger, and then your database is out of the
picture. The queue server, which may run on this or another machine,
handles the export in real time; your FTP batch process then picks up the
file when the queue manager detects that the conditions are met and
signals for transmission.
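
To make the shape concrete, here is a sketch of the trigger-side submit,
using a POSIX message queue as a stand-in for MQ or whatever queue manager
you pick (the queue name /dnc_export is made up; link with -lrt on Linux):

/* submit_row.c -- illustration of "send and forget": the trigger-fired
 * code hands the row to the queue and returns immediately.  A POSIX
 * message queue stands in for a real queue manager here.
 */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int submit_row(const char *row_data)
{
    /* the queue is assumed to have been created by a setup step */
    mqd_t q = mq_open("/dnc_export", O_WRONLY);
    if (q == (mqd_t)-1) {
        perror("mq_open");
        return -1;
    }

    /* once mq_send returns, the DBMS is out of the picture;
       delivery and durability are the queue manager's problem */
    int rc = mq_send(q, row_data, strlen(row_data), 0);
    if (rc != 0)
        perror("mq_send");

    mq_close(q);
    return rc;
}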

This solution takes the "garbage collection" for the second table out of
the DBMS, since the data is posted to the queue by the trigger on the
table. Queues don't generally get thrashed unless you roll back
transactions frequently - they take data in one end and put it out the
other, serializing the transactions by time of submission. There is a
performance hit, but a good queue manager should keep the hit well below
the hit you are already taking for DBMS I/O.

If the flag is only required to indicate which rows are to be processed,
a queueing mechanism would replace the "update flag" SQL with "select",
and then throw each row out of the DBMS to the queue manager. The queue
client will then be responsible for translating the rows into the format
preferred by your vendor, and transmitting the resultant file via FTP.
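
The consumer side can be equally small. A sketch, again with a POSIX queue
standing in for the queue manager (the queue name, output file, and the
vendor formatting step are all placeholders):

/* queue_client.c -- drain the queue and append rows to a flat file for
 * the FTP batch step.  Names and sizes are placeholders.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    mqd_t q = mq_open("/dnc_export", O_RDONLY);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    struct mq_attr attr;
    mq_getattr(q, &attr);            /* learn the max message size */
    char *buf = malloc(attr.mq_msgsize + 1);

    FILE *out = fopen("dnc_batch.txt", "a");
    if (out == NULL) { perror("fopen"); return 1; }

    for (;;) {
        ssize_t n = mq_receive(q, buf, attr.mq_msgsize, NULL);
        if (n < 0) { perror("mq_receive"); break; }
        buf[n] = '\0';
        /* translate the row into the vendor's format here */
        fprintf(out, "%s\n", buf);
        fflush(out);                 /* keep the file ready for pickup */
    }

    fclose(out);
    mq_close(q);
    free(buf);
    return 0;
}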

With queueing mechanisms in the correct places, you can pipeline your
processes to make full use of CPU and I/O channels.

Perhaps Jim, Ann, or Helen know of a queue manager built on the Firebird
engine, much like MQ builds on the DB2 engine?




