Subject: Re: [firebird-support] Re: Blob Parsing Stored Procedure
Author: Svein Erling Tysvaer
Hi again, Luke!

So far, I'm just full of questions. 110 seconds to do what? Insert one
record or 110,000 records? And did you manage to go from 110 seconds to
0.2 seconds to do the very same thing?

I've never used ODBC (others on this list know about that), but using
IBO (with Delphi) to prepare once, then execute many times, I've moved
data from one Firebird 1.5.2 database to another at about 4000-6000
records per second without caring too much about optimization. I think
both databases were on the same server; it would of course be slower if
I had to transport things over a slow network.

What I normally do is something like:

IB_DSQL1.SQL.Add('INSERT INTO MyTable(Field1, Field2, Field3) ' +
                 'VALUES (:Param1, :Param2, :Param3)');
IB_DSQL1.Prepare;
while not <SomeLoop> do
begin
  { assign IB_DSQL1.Params[0..2] here, then }
  IB_DSQL1.Execute;
end;
At least with IBO, using ParamByName is considerably slower than Param,
and using an IB_Query would be slower than using IB_DSQL.
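The prepare-once, execute-many pattern above can be sketched in a self-contained way with Python's standard sqlite3 module (purely an illustration of the pattern, not Firebird, IBO, or ODBC; the table and column names are made up):

```python
import sqlite3

# An in-memory database stands in for the target database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (Field1 INTEGER, Field2 INTEGER, Field3 INTEGER)")

rows = [(i, i * 2, i * 3) for i in range(10000)]

# executemany reuses a single prepared statement for every row,
# and all inserts run inside one transaction until commit().
con.executemany(
    "INSERT INTO MyTable (Field1, Field2, Field3) VALUES (?, ?, ?)", rows
)
con.commit()

count = con.execute("SELECT COUNT(*) FROM MyTable").fetchone()[0]
print(count)  # 10000
```

The point is the shape, not the library: one prepare, one transaction, many executes with only the parameter values changing per row.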

If you could show an example of what you do (and tell whether you do
this over a slow line or on the same machine as the database), then
maybe someone here will be able to tell whether you're doing things
optimally, or if there are flaws in your approach.


lucas_jessee wrote:
> Well I'm seeing about 1000 per second in my ideal test. For some
> reason in my "real-world" example it's a bit slower than that. To
> give you an idea of relative performance: initially, re-preparing my
> statement each time, a certain operation took 110 seconds. By altering
> the code to reuse the same statement without preparing/binding again, I
> was able to decrease the time to 70 seconds. 70 seconds was still a
> long time to wait in my GUI-based application...
> So I thought to myself, what if I just serialize this entire thing and
> store it as a single BLOB on the parent table (that the FK of the
> smaller table points to). So I serialized everything down into one
> large binary blob (similar to what would be passed into the stored
> procedure I described). My total time dropped to 0.2 seconds.
> Now the problem is that this approach is poor from a DB architecture
> point of view. Applications may want to search through this table and
> query based on the values, and I'm flattening them all to a single BLOB.
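The flattening Luke describes could look roughly like this, sketched with Python's struct module (a hypothetical illustration; the fixed record layout and helper names are invented, and, as he notes, the individual fields stop being queryable by SQL):

```python
import struct

# Each small record is roughly four BIGINTs: 8 bytes each, little-endian here.
RECORD = struct.Struct("<4q")

def pack_records(records):
    """Serialize a list of 4-tuples of ints into one contiguous blob."""
    return b"".join(RECORD.pack(*r) for r in records)

def unpack_records(blob):
    """Recover the 4-tuples from the blob by fixed-size offsets."""
    return [RECORD.unpack_from(blob, off) for off in range(0, len(blob), RECORD.size)]

recs = [(1, 2, 3, 4), (5, 6, 7, 8)]
blob = pack_records(recs)
print(len(blob))                     # 64: 2 records * 4 fields * 8 bytes
print(unpack_records(blob) == recs)  # True
```

One insert of such a blob replaces thousands of row inserts, which is why the total time collapses, at the cost of moving all parsing and searching out of the database and into application code.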
> I do have auto-commit turned off also; I commit when my application
> terminates. And...for now...I only have one application connected at
> a time. And I share a single connection throughout the entire
> application as well. The code is organized very well in OO C++.
> Also, I'm using the ODBC driver, if that makes a difference to any of you.
> Luke
> --- In, Svein Erling Tysvær
> <svein.erling.tysvaer@...> wrote:
>> Hi Luke!
>>
>>> I've got a lot of small inserts to do and it's just taking much longer
>>> than I'm willing to wait for (even with an embedded DB). Basically
>>> these inserts are small records (roughly 4 BIGINT's) that have a FK to
>>> another larger table.
>>
>> What is 'a lot of small inserts'? Are we talking millions? How are
>> you inserting and how many records per second do you achieve?
>>
>> Generally, Firebird is good at executing stuff once things are
>> prepared in a transaction. A few thousand inserts per second is often
>> feasible. Preparing for each repetition will slow things down
>> considerably, and starting a new transaction or connection for each
>> repetition will make things unbearably slow.
>>
>> Set