| Subject | Re: [IBO] Prepared Statements and Memory Usage |
|---|---|
| Author | Helen Borrie |
| Post date | 2005-04-12T03:07:33Z |
Hello Ryan,
At 12:23 PM 12/04/2005 +1000, you wrote:
> Hi All,
>I posted this problem a while ago and then got side-tracked.

I saw your earlier posting and thought it looked like one that was going to
need a long answer. :-)
>We have a process iterating through some xml tabledata, Insert statements
>are generated for each record on the iteration over the xml doc, using a
>TIB_Query component.
>
>The issue we are having is that when we have a large table (82 columns) the
>memory usage skyrockets when we call Prepare() on the query (about 15-23Mb
>for each call). This memory is not freed until we kill the process.

That's exactly what Prepare does. It literally *prepares* the various
structures that are needed to instantiate the statement and prepare the
XSQLDA structure for transporting the parameter data across to the
server. Amongst the many pieces of this task is to prepare one XSQLVAR for
*each* parameter. Each XSQLVAR is made of several pieces, concerning data
type, size, etc., as well as a buffer large enough to carry data of the
maximum defined size. 82 parameters is one heck of a lot of XSQLVARs!! If
a lot of these are character types, as one supposes they might be if you
are processing XML, then that's simply one heck of a lot of memory that has
to be allocated.
If you are iterating through a loop, it is far more economical to keep
the statement prepared and recycle the allocated memory, than it is to
unprepare it all (freeing that memory) and then re-do it in the next cycle
of the loop.
>In doing some debugging, I have traced the memory usage to the
>IB_Components.pas file (starting line 22038) where the parameters are
>iterated through (I have included the loop below). I don't know if this is a
>problem with the IBO code (I hope that if it was then it would have been
>discovered and dealt with by now) or the way we are using the queries.

It really gets down to the way you are doing this processing. To begin
with, IB_Query is a poor choice for iteratively processing a DML
statement. IB_Cursor would use fewer resources and IB_DSQL the least of
all. Is there some particular reason you have to use an IB_Query?
Now, whilst you can save resources by choosing the correct component for
submitting the DML request, you can't reduce the amount of memory that is
allocated for executing the statement. A parameter has to be large enough
to take the largest possible parameter value presented to it; so, if this
is the way you have to perform this task, you are very much stuck with the
consequences.
>On stepping through this loop (very painful for 82 parameters!) I found that
>the memory jumps at the one point indicated below in the code (at about the
>71st parameter - its a Decimal(9,2)).
>
>For the life of me I cannot see how this can cause the jump in memory usage,
>if anyone has any ideas they'd be greatly appreciated.
>
>We are using C++ Builder and IBObjects v.4.5Ai, this problem has only
>surfaced since we updated our version of IBObjects (currently rolling back
>to the old version).
>
>The sqlInsertStr is generated as: "INSERT INTO MBPROVIDERITEM
>VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?
>,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?
>,?,?,?,?,?,?,?,?,?)"
>
>/* Our code for the query
>*/
>
>qrUpdateInsert->SQL->Clear();
>qrUpdateInsert->SQL->Add(sqlInsertStr);
>qrUpdateInsert->Prepare(); <------------ Memory jumps here

As expected. Taking your 82-parameter statement from nil allocations to 82
is certainly going to tickle RAM.
>IB_Components.pas [22038]:
> for ii := 0 to ParamCount - 1 do
> try
> with Params[ii] do
> if not IsBlob and not IsArray then
> begin
> tmpInt := FOldParamValueLinks.LinkIndex[ BestFieldName ];
> if tmpInt <> -1 then
> begin
> tmpStr := FOldParamValueLinks.IndexValues[ tmpInt ];
> if IsNull or ( AsString <> tmpStr ) then
> try
> if IsDateTime and ( tmpStr <> '' ) then
> AsDateTime := EncodeStringToDateTime( tmpStr )
> else
> begin
> if IsCurrencyDataType then
> tmpStr[Length( tmpStr ) + SQLScale] :=
>DecimalSeparator;
>------------------> AsString := tmpStr; <------------ Memory Jumps Here
The length of this temporary string at this point for a decimal(9,2) will
be 11 + 2 + 1, i.e. 13 bytes. That's somewhat shorter than some other
types potentially would be. What do you see if you trace the value of
tmpStr at this point, when dealing with this particular data type?
> end;
> except
> Clear;
> end;
> FOldParamValueLinks.Delete( tmpInt );
> end;
> end;
> except
> // Toss any exception here.
> end;

One thing that's not clear in your description is whether the client is
local or remote, i.e. you don't say whether the client is sharing resources
with the server.
Another question I'm inclined to ask is why you need to do this processing
as a "client-side" task at all. I'm asking because you didn't say
whether the XML source text was coming from the output of a database query,
or by parsing an input file...
-- if the former, then a stored procedure with a couple of nested loops
would be far more efficient than what you're doing now
-- if the latter, then the question arises of where the "hit" of file i/o
is being taken.
regards,
Helen