Subject: Re: [IBO] TIB_DSQL
Author: Helen Borrie
At 09:40 AM 18/01/2005 +1100, you wrote:

> > It's a problem of transaction control. Destroying the statement object
> > will get rid of the statement resources and immediately recreate them as
> > soon as you recreate and prepare the object, so it's a transitory saving
> > that just uses more resources on the client. It won't do anything to
> clean
> > up the transaction resources that are building up on the server from soft
> > commits.
>
>I've been monitoring the virtual memory usage of both the client app and
>the server, during the processing fbserver sits at around 20-25Mb,
>whilst the vm usage of the client grows to enormous proportions.
>
>Could this still be a transaction control problem even though the server
>is not constantly increasing its memory usage?

Now there's a 64-million dollar question. Yesterday it was the server that
was showing the memory growth. Today it is the client. Still, at least it
brings you closer to isolating the memory eater.
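
For clarity, by "soft commits" I mean CommitRetaining. A minimal sketch of
the difference, assuming an explicit IBO TIB_Transaction component (the
name trUpload is just for illustration):

   // Soft commit: posts the work but keeps the transaction context
   // (and its server-side resources) alive - repeated soft commits
   // are what let resources build up on the server.
   trUpload->CommitRetaining();

   // Hard commit: ends the transaction completely, so the server can
   // release the resources it was holding for it.
   trUpload->Commit();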

> > this problem is "something else" and we should look at
> > (a trimmed please!! not 81 ParamByName assignments!! version of) your code
> > and the dfm text for your DSQL object.
>
>
> DSQL DFM code; the transaction is being assigned before the iteration
>through the XML.
>
> TIB_DSQL *sSQL;
>
> sSQL = new TIB_DSQL(this);
> sSQL->Name = "sSQL";
> sSQL->DatabaseName = "mbclient.gdb";
> sSQL->IB_Connection = newConn;
>
>
> Code:
>
> if (UploaderTest->sSQL->Prepared)
> {
>     UploaderTest->sSQL->Unprepare();
> }
> UploaderTest->sSQL->SQL->Clear();
> UploaderTest->sSQL->SQL->Add(sqlStr); // Generated SQL statement
> try
> {
>     UploaderTest->sSQL->Prepare(); // Memory Jump Here
> }
> ...
>
>The sqlStr is generated by selecting the RDB$FIELD_NAME from
>RDB$RELATION_FIELDS and generating the relevant insert or update code
>depending on the parameters in the XML.

There are big holes here. We don't see the loop.

Also - is it the case that each XML document potentially loads data into a
different table?

If yes, then a costly operation like the above is probably unavoidable, as
long as it is your design choice to re-use the same object over and
over. There's too much potential here for a wrong step to cause a leak.
I'd want to cleanly hard commit the transaction and destroy the DSQL object
when each XML document is finished with, unless there is some (so far
invisible) rationale for your approach.
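
Very roughly, the per-document pattern I have in mind looks like this (only
a sketch - trUpload and newConn are illustrative names, and your parameter
handling and error reporting will differ):

   TIB_DSQL *sSQL = new TIB_DSQL(this);
   sSQL->IB_Connection = newConn;
   sSQL->IB_Transaction = trUpload;    // explicit transaction for this document
   try
   {
     trUpload->StartTransaction();
     try
     {
       sSQL->SQL->Text = sqlStr;       // the generated INSERT or UPDATE
       sSQL->Prepare();
       // ... for each row in the document: ParamByName() assignments, Execute() ...
       trUpload->Commit();             // hard commit - lets the server clean up
     }
     catch (...)
     {
       trUpload->Rollback();
       throw;
     }
   }
   __finally
   {
     delete sSQL;                      // statement destroyed when the document is done
   }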

If no, then this costly operation can't be justified. Create the DSQL
object once, set it up, and keep it prepared until you don't need it any
more. This too has the potential to cause leaks, which some
strategically-placed try...finally blocks will go a long way towards
eliminating.
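
That is, something along these lines (again just a sketch, using
C++Builder's try...__finally, with names that are only illustrative):

   sSQL = new TIB_DSQL(this);
   sSQL->IB_Connection = newConn;
   sSQL->SQL->Text = sqlStr;
   sSQL->Prepare();                    // prepare once, up front
   try
   {
     // ... the whole processing loop: ParamByName() assignments,
     //     Execute(), and a hard Commit() at sensible intervals ...
   }
   __finally
   {
     delete sSQL;                      // always released, even if an exception escapes
   }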


>For the majority of the inserts and updates there is only a tiny jump in
>the memory usage of the client when the dsql is prepared, it is only the
>large records coming down that cause the huge jump in memory.

If the problem looks like a memory leak in your app (a gazillion stream
objects floating around in VM? or large string variables doing likewise?)
then we don't have enough info so far to identify the hotspots.

Inspect closely what you are doing throughout the cycle: where the XML doc
is coming from (a blob? a COM server? a file?), how you are managing it,
what you are doing with the doc (streams, substrings, ticker variables,
etc.), whether you are trying to update the same rows that were inserted
earlier in the cycle, blah, blah, blah.

Do you have a resource tracker that you can run over the client app? It
makes life a lot simpler when memory leaks beep at you...

Helen