Subject RE: [firebird-support] Insertion speed
Author Todd Pinel
Thanks for all the help and tips. I have to admit that my simple query
wasn't quite as simple as originally stated. What it was trying to achieve
was something like this:
FOR
  SELECT BP.BUYER_NO
  FROM BUYERPUBLICATION BP
  WHERE BP.PUBLICATION_NO = :PUBLICATION_NO
    AND NOT EXISTS (SELECT IM.BUYER_NO
                    FROM ISSUESMAILED IM
                    WHERE IM.BUYER_NO = BP.BUYER_NO
                      AND IM.ISSUE_NO = :ISSUE_NO)
  INTO :BUYER_NO
DO
BEGIN
  /* INSERT BUYER_NO INTO OTHER TABLE */
END

It selects all buyers assigned to a certain publication who have not already
been mailed an issue (i.e. do not already exist in the ISSUESMAILED table for
that issue), then inserts that set into the ISSUESMAILED table.

I found the NOT EXISTS clause was really dragging the query time down, so I
tried another approach using a left outer join, but that didn't improve the
speed at all; it performed basically the same:

FOR
  SELECT BP.BUYER_NO
  FROM BUYERPUBLICATION BP
  LEFT OUTER JOIN ISSUESMAILED IM
    ON (BP.BUYER_NO = IM.BUYER_NO AND IM.ISSUE_NO = :ISSUE_NO)
  WHERE BP.PUBLICATION_NO = :PUBLICATION_NO
    AND IM.ISSUESMAILED_NO IS NULL
  INTO :BUYER_NO
DO
BEGIN
  /* INSERT BUYER_NO INTO OTHER TABLE */
END
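For what it's worth, the whole FOR..SELECT loop could in principle be
collapsed into a single set-based INSERT ... SELECT, which avoids the
per-row loop overhead. This is only a sketch; the ISSUESMAILED column list
is assumed and would need adjusting to the real table definition:

```sql
/* Sketch of a set-based alternative to the FOR..SELECT loop.
   The column list (BUYER_NO, ISSUE_NO) is an assumption. */
INSERT INTO ISSUESMAILED (BUYER_NO, ISSUE_NO)
SELECT BP.BUYER_NO, :ISSUE_NO
FROM BUYERPUBLICATION BP
WHERE BP.PUBLICATION_NO = :PUBLICATION_NO
  AND NOT EXISTS (SELECT 1
                  FROM ISSUESMAILED IM
                  WHERE IM.BUYER_NO = BP.BUYER_NO
                    AND IM.ISSUE_NO = :ISSUE_NO);
```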

When I run these two queries by themselves through the IBExpert console and
compare their statistics, they appear to use close to the same plan, except
that one is joining:

Not Exists Version
Adapted Plan
PLAN (IM INDEX (INTEG_132,INTEG_177))
PLAN (BP INDEX (FK_BUYERPUBLICATIONPUBLICAT))

Left Outer Join Version
Adapted Plan
PLAN JOIN (BP INDEX (FK_BUYERPUBLICATIONPUBLICAT), IM INDEX (INTEG_132,INTEG_177))
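Both plans probe ISSUESMAILED through two separate single-column indexes
(INTEG_132, INTEG_177), which look like constraint-generated indexes. One
thing that might be worth trying is a compound index covering both columns
the subquery filters on, so each existence probe is a single lookup. Sketch
only; the index name is made up and the column choice is an assumption:

```sql
/* Hypothetical compound index so the NOT EXISTS probe on
   ISSUESMAILED (BUYER_NO, ISSUE_NO) is a single index lookup. */
CREATE INDEX IDX_ISSUESMAILED_BUYER_ISSUE
  ON ISSUESMAILED (BUYER_NO, ISSUE_NO);
```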


I ran the stored procedure that calls these two versions of the query to
build the insertion set (on our production server, which has better hardware
than my development machine) and got the following results:

~50s calling the stored proc. using the Left Outer Join query to build the
insertion set and inserting the records
~40s calling the stored proc. using the Not Exists query to build the
insertion set and inserting the records

I also changed the query to be just a straight collection of BUYER_NOs,
i.e.:
SELECT BP.BUYER_NO
FROM BUYERPUBLICATION BP
WHERE BP.PUBLICATION_NO = :PUBLICATION_NO

And the result was roughly 4s to collect and insert all records into the
other table.

So I can see that the better hardware definitely helped the speed, but I am
still used to most operations being lightning fast in Firebird for a similar
number of records. The receiving table does have several indexes, but they
are not user-defined; they come from foreign keys I have added to that table
(if I'm understanding "user-defined" correctly), and this kind of routine is
run fairly infrequently. Also, the transaction is a straight commit, not
commit-retaining.

I was also interested in Helen's suggestion to break this into two
transactions and hard-commit between and after. That makes sense, but I am
running this routine inside a stored procedure and I am not sure I can
control transactions within the context of the procedure (I checked Helen's
book and it looks like I can't). Is this something I would have to control
from my application, or is there some way to achieve it from within the
stored procedure?

Last but not least: am I just spoiled by using such a great DB ;) or is
there any way to bring the NOT EXISTS query closer to the performance of the
straight query (~40s closer to ~4s)? Although 40s does not seem too long, we
are soon going to be dealing with record sets in the range of hundreds of
thousands of records, and I'd like to figure out in advance what I'm doing
wrong, or what I can do better. Thanks, Todd.


_____

From: Helen Borrie [mailto:helebor@...]
Sent: Monday, February 07, 2005 7:33 PM
To: firebird-support@yahoogroups.com
Subject: Re: [firebird-support] Insertion speed

At 03:09 PM 7/02/2005 -0700, you wrote:

>Hi, I was wondering what kind of performance is normal to expect when
>inserting multiple records into Firebird? The scenario I have right now is:
>I call a stored procedure that runs a FOR..SELECT to gather a list of
>people that need to be inserted into another table. This query is simple;
>basically, select all people with this attribute. I have in the past been
>able to optimize this FOR..SELECT query to improve speed, but this time I'm
>wondering if it's just the number of records I am trying to insert which is
>taking so long. There are no other stored procedures or triggers involved
>in this procedure, and I insert roughly 15000 records and it's taking about
>1.5 minutes. Is there some way or technique I could use to improve this
>speed?

Yup. Break this into two transactions and hard-commit between the two and
after the second. The recommended "start point" for determining the
optimal maximum number of statement executions in a transaction is ~8000.
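Since PSQL cannot issue a COMMIT inside a stored procedure, batching like
this has to be driven from outside, e.g. an isql script or the application.
A sketch, where MAIL_ISSUE_BATCH is a hypothetical procedure that processes
at most a given number of rows per call (the procedure name, parameters, and
literal values are all assumptions):

```sql
/* Sketch: drive the batches from isql (or the application),
   hard-committing between batches and after the last one.
   MAIL_ISSUE_BATCH is a made-up procedure. */
EXECUTE PROCEDURE MAIL_ISSUE_BATCH (1 /* publication_no */, 42 /* issue_no */, 8000 /* max rows */);
COMMIT;
EXECUTE PROCEDURE MAIL_ISSUE_BATCH (1, 42, 8000);
COMMIT;
```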

If there are a lot of user-defined indexes on the receiving table, and the
table is a fast-growing one, and you are doing these inserts regularly, then
you could consider doing them at times when other users are offline, and
setting indexes inactive for the duration.
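Setting an index inactive around the bulk load might look like this. Sketch
only: the index name is made up, and indexes that enforce constraints (the
ones backing primary, unique, and foreign keys) cannot be switched inactive,
so this only applies to genuinely user-defined indexes:

```sql
/* Sketch: deactivate a user-defined index during the bulk load,
   then reactivate it, which rebuilds it in one pass.
   IDX_ISSUESMAILED_BUYER is a made-up name. */
ALTER INDEX IDX_ISSUESMAILED_BUYER INACTIVE;
COMMIT;
/* ... run the bulk insert here ... */
ALTER INDEX IDX_ISSUESMAILED_BUYER ACTIVE;
COMMIT;
```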

Make sure that you don't run these bulk inserts in a commit retaining
transaction.

./hb







