Subject | Re: Slow performance with Index
---|---
Author | vandy899
Post date | 2005-02-04T03:52:38Z
I personally have implemented something like suggestion 3 in one of my
client's dbs, only using events instead. OTOH, they do not do 1.65
inserts per second (more like 1 insert per minute or so). Are there
performance problems with events?

Cheers,
Michael

--- In firebird-support@yahoogroups.com, "Edwin A. Epstein, III"
<eepstein@c...> wrote:
> It is necessary to process these in batch mode. I have several SP's
> that will set the process flag based upon multiple different criteria
> in the database. These SP's run in a single transaction and can flag
> as many as 250K to 300K on their own. So semi-continuous scanning of
> the flag field is sort of already being done. It's just that the
> number of flags that are set at any one time is already fairly huge.
> We take the batch of records and send them out via FTP, then check the
> FTP every 5-10 minutes for a new result file. This file is then
> manually processed into the database (going to be automatic shortly).
> This processing happens fairly fast, about 700+ updates per second to
> the database. So it is usually always under an hour for the processing
> results to be updated into the database.
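
The flag-then-export cycle described above can be sketched in Firebird SQL. This is illustrative only: `DNC_QUED` is the flag column named later in the thread, but the table name, key, and criterion are assumptions, not the poster's actual schema.

```sql
/* Assumed names: PHONE_LIST and LAST_CONTACT are hypothetical;
   DNC_QUED is the flag column mentioned in the thread. */

-- One of several SP-driven criteria, flagging a large batch in one transaction
UPDATE PHONE_LIST
   SET DNC_QUED = 1
 WHERE LAST_CONTACT < CURRENT_DATE - 30;

-- The export step then pulls only flagged rows (flag value = 1)
SELECT PHONE_NUMBER
  FROM PHONE_LIST
 WHERE DNC_QUED = 1;
```

With 250K-300K rows flagged per run, the single `UPDATE` is also what generates the record versions that garbage collection later has to clean up.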
>
> I have had problems with a 2nd table. It gets "thrashed" so often
> that it begins to severely impact its performance rather quickly. So
> many inserts, deletes, triggers, etc. It causes a lot of garbage
> collection to be running and has impacted the backup performance. It
> was, however, the fastest at being exported.
>
> -----Original Message-----
> From: David Johnson [mailto:d_johnson@c...]
> Sent: Thursday, February 03, 2005 5:56 PM
> To: firebird-support@yahoogroups.com
> Subject: RE: [firebird-support] Slow performance with Index
>
>
>
> Suggestion 1. Is it necessary to process these in batch mode? If your
> vendor can process these in real time you can distribute the bottleneck
> over time, and it will be less apparent.
>
> In real time, as records are written to the table (or possibly a
> separate process pending table), they are processed and cleared (or
> removed). To process 2 million records per day, you need to sustain a
> rate of only 23 to 24 records per second. This is readily attainable on
> wintel servers, and even desktops. Your application handles about 1/5th
> this much traffic, so this performance may represent a good spike
> solution.
>
> In the case that you write to a process pending table, no flag is
> required. The existence of a record in the process pending table is
> evidence that processing is required.
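
A minimal sketch of the process-pending-table idea, under assumed names (only the pattern comes from the post, none of the identifiers):

```sql
/* Row presence in DNC_PENDING replaces the DNC_QUED flag entirely. */
CREATE TABLE DNC_PENDING (
    REC_ID INTEGER NOT NULL PRIMARY KEY
);

SET TERM ^ ;
CREATE TRIGGER TRG_QUEUE_DNC FOR PHONE_LIST
ACTIVE AFTER UPDATE POSITION 0
AS
BEGIN
  /* Enqueue only on the 0 -> 1 transition, not on every update */
  IF (NEW.DNC_QUED = 1 AND OLD.DNC_QUED = 0) THEN
    INSERT INTO DNC_PENDING (REC_ID) VALUES (NEW.REC_ID);
END^
SET TERM ; ^
```

The exporter then reads and deletes from `DNC_PENDING`, so the big table's index on the flag is never scanned at all.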
>
> Suggestion 2. Have a trigger write updated records to an external table
> for later processing. This can work with either batch model or real
> time model.
>
> In batch model, the external file is simply fired off to the vendor for
> processing and cleared. For performance reasons, you may wish to zip
> the file before sending it off.
>
> In this scenario, no flag is required. The existence of a row in the
> external table is evidence of a need for processing.
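
Firebird supports external tables backed by a fixed-format flat file, which is one way to realize this suggestion. A sketch, assuming illustrative names and paths; note that external file access must be permitted in `firebird.conf` (`ExternalFileAccess` in Firebird 1.5), and external tables accept `INSERT`/`SELECT` but not `DELETE`, so the file is "cleared" outside the engine after it is shipped:

```sql
/* Path, table, and column names are assumptions for illustration. */
CREATE TABLE DNC_EXPORT EXTERNAL FILE '/data/export/dnc_batch.dat' (
    PHONE_NUMBER CHAR(10)
);

SET TERM ^ ;
CREATE TRIGGER TRG_EXPORT_DNC FOR PHONE_LIST
ACTIVE AFTER UPDATE POSITION 0
AS
BEGIN
  /* Append the row to the flat file as soon as it needs processing */
  IF (NEW.DNC_QUED = 1 AND OLD.DNC_QUED = 0) THEN
    INSERT INTO DNC_EXPORT (PHONE_NUMBER) VALUES (NEW.PHONE_NUMBER);
END^
SET TERM ; ^
```

Zipping `/data/export/dnc_batch.dat` before the FTP step, as suggested, happens outside the database.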
>
> Suggestion 3. Have a second process that runs in its own thread
> continually scan the table "as-is" for items to process, append them to
> a flat file, and update the flag. The existing index would buy you a
> fair bit in this scenario, since you would only need to pull out 23 to
> 24 records per pass, assuming 1 second intervals between passes.
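
One polling pass of suggestion 3 might look like the following Firebird sketch (batch size and identifiers are assumptions; `FIRST n` is the Firebird row-limit syntax):

```sql
-- Grab a small batch through the index on the flag column
SELECT FIRST 100 REC_ID, PHONE_NUMBER
  FROM PHONE_LIST
 WHERE DNC_QUED = 1;

-- After appending each row to the flat file, clear its flag in the
-- same transaction so the next pass does not pick it up again
UPDATE PHONE_LIST
   SET DNC_QUED = 0
 WHERE REC_ID = :REC_ID;
```

Because each pass touches only a couple dozen rows, the index stays useful instead of being asked to return hundreds of thousands of entries at once.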
>
> On Thu, 2005-02-03 at 11:35, Edwin A. Epstein, III wrote:
> >
> > >>First, no professional programmer would ever send 7 million records
> > >>across a network to a client application, so I'm going to assume
> > >>whatever you're doing is happening on the server.
> >
> > Welllll. Not 7 million records across the network, but between 100K
> > and 2 million records across the network. I don't really see that I
> > have a choice. I have to export those records one way or another for
> > processing. I am using a 100 mbit network and a machine right next
> > to the server to obtain these records. I am up for suggestions on
> > the fastest way.
> >
> > There are a huge number of duplicates. I will not ever have to query
> > the database for records with a flag value = 0, only records with a
> > flag value = 1.
> >
> > According to your definition I have an extremely volatile database.
> > An average of 1 million records per week being inserted into the
> > database. In addition to the inserts, updates are happening on the
> > records in about the same amount. The DNC_QUED field I am using to
> > indicate that the record needs further processing outside of the
> > server. This does not happen only one time. This may happen multiple
> > times on the same record in a monthly period.