Subject | Re: Blob Parsing Stored Procedure
---|---
Author | lucas_jessee
Post date | 2007-07-03T15:07:30Z
Well, I'm seeing about 1,000 inserts per second in my ideal test. For
some reason my "real-world" example is a bit slower than that. To give
you an idea of relative performance: initially, re-preparing my
statement each time, a certain operation took 110 seconds. By altering
the code to reuse the same statement without preparing/binding again, I
cut that to 70 seconds. 70 seconds was still a long time to wait in my
GUI-based application... So I thought to myself: what if I just
serialize this entire thing and store it as a single BLOB on the parent
table (the one the smaller table's FK points to)? So I serialized
everything down into one large binary blob (similar to what would be
passed into the stored procedure I described), and my total time
dropped to 0.2 seconds.
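To make the flattening concrete, it's roughly this shape (a minimal sketch; the struct and field names are invented here, not my actual schema, and it assumes fixed-width records written and read on the same platform so a raw memcpy round-trips exactly):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// One small record: roughly four BIGINTs, as described above.
// Field names are illustrative only, not the real schema.
struct Record {
    int64_t a, b, c, d;
};

// Flatten all records into one binary buffer for the BLOB column.
// Record is trivially copyable with no padding (4 x int64_t), so a
// raw memcpy preserves the values on the same platform/ABI.
std::vector<uint8_t> serialize(const std::vector<Record>& recs) {
    std::vector<uint8_t> blob(recs.size() * sizeof(Record));
    std::memcpy(blob.data(), recs.data(), blob.size());
    return blob;
}

// Inverse: split the BLOB back into records (the job the stored
// procedure in the subject line would do server-side).
std::vector<Record> deserialize(const std::vector<uint8_t>& blob) {
    std::vector<Record> recs(blob.size() / sizeof(Record));
    std::memcpy(recs.data(), blob.data(), blob.size());
    return recs;
}
```

Writing one BLOB means one execute instead of thousands of small inserts, which is where the 70 s vs. 0.2 s gap comes from.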
Now the problem is that this approach is poor from a DB-architecture
point of view: applications may want to search this table and query on
the values, and I'm flattening them all into a single BLOB.
I do have auto-commit turned off, too; I commit when my application
terminates. And, for now, only one application connects at a time, and
I share a single connection throughout the entire application. The code
is well organized in OO C++.
Also, I'm using the ODBC driver, if that makes a difference to any of you.
Luke
--- In firebird-support@yahoogroups.com, Svein Erling Tysvær
<svein.erling.tysvaer@...> wrote:
> Hi Luke!
>
> > I've got a lot of small inserts to do and its just taking much longer
> > than I'm willing to wait for (even with an embedded DB). Basically
> > these inserts are small records (roughly 4 BIGINT's) that have a FK to
> > another larger table.
>
> What is 'a lot of small inserts'? Are we talking millions? How are
> you inserting and how many records per second do you achieve?
>
> Generally, Firebird is good at executing stuff once things are
> prepared in a transaction. A few thousand inserts per second is often
> feasible. Preparing for each repetition will slow down things
> considerably, and starting a new transaction or connection for each
> repetition will make things unbearably slow.
>
> Set
>
>
> [Non-text portions of this message have been removed]
>