Subject Re: [IBO] TIB_DSQL
Author Ryan Thomas
> It's a problem of transaction control. Destroying the statement object
> will get rid of the statement resources and immediately recreate them as
> soon as you recreate and prepare the object, so it's a transitory saving
> that just uses more resources on the client. It won't do anything to clean
> up the transaction resources that are building up on the server from soft
> commits.

I've been monitoring the virtual memory usage of both the client app and
the server. During the processing, fbserver sits at around 20-25 MB,
whilst the virtual memory usage of the client grows to enormous proportions.

Could this still be a transaction control problem even though the server
is not constantly increasing its memory usage?
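
For reference, my understanding of the explicit start/commit pattern described
above is roughly the following; this is only a sketch using the usual
TIB_Transaction calls, and trUpload is a placeholder name, not the actual
component in the app:

trUpload->StartTransaction();
try
{
    // ... iterate the XML, prepare and execute the generated statements ...
    trUpload->Commit();       // hard commit: releases the server-side resources
}
catch (...)
{
    trUpload->Rollback();
    throw;
}
// CommitRetaining() would be the "soft commit" that keeps those resources alive.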

> Now, all that said, if you *are* taking care to explicitly start and commit
> transactions, then this problem is "something else" and we should look at
> (a trimmed please!! not 81 ParamByName assignments!! version of) your code
> and the dfm text for your DSQL object.


DSQL setup code (the object is created at runtime, so there is no DFM text
for it); the transaction is assigned before the iteration through the XML.

TIB_DSQL *sSQL;

// sSQL is created at runtime and attached to the existing connection
sSQL = new TIB_DSQL(this);
sSQL->Name = "sSQL";
sSQL->DatabaseName = "mbclient.gdb";
sSQL->IB_Connection = newConn;
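
The transaction assignment mentioned above is not shown in that snippet; it is
done the same way before the loop, roughly (again with trUpload standing in
for the real transaction component):

sSQL->IB_Transaction = trUpload;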


Code:

// One pass of this block runs for each insert/update generated from the XML
if (UploaderTest->sSQL->Prepared)
{
    UploaderTest->sSQL->Unprepare();
}
UploaderTest->sSQL->SQL->Clear();
UploaderTest->sSQL->SQL->Add(sqlStr); // Generated SQL statement
try
{
    UploaderTest->sSQL->Prepare(); // Memory Jump Here
}
...
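
After a successful prepare, the parameter assignments and the execute follow
(trimmed here as requested); in outline, with a made-up column name:

UploaderTest->sSQL->ParamByName("CLIENT_NAME")->AsString = "example value";
UploaderTest->sSQL->Execute();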

The sqlStr is generated by selecting RDB$FIELD_NAME from
RDB$RELATION_FIELDS and building the relevant INSERT or UPDATE statement
depending on the parameters in the XML.
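
For illustration, the INSERT case amounts to something like the sketch below;
the table and column names are made up, and the real column list comes from
the RDB$RELATION_FIELDS query:

#include <string>
#include <vector>

// Builds a parameterized INSERT from a list of column names.
std::string BuildInsertSql(const std::string &table,
                           const std::vector<std::string> &columns)
{
    std::string cols, params;
    for (std::vector<std::string>::size_type i = 0; i < columns.size(); ++i)
    {
        if (i > 0) { cols += ", "; params += ", "; }
        cols   += columns[i];
        params += ":" + columns[i];   // named marker, filled via ParamByName
    }
    return "INSERT INTO " + table + " (" + cols + ") VALUES (" + params + ")";
}

// e.g. with table "CLIENTS" and columns CLIENT_ID, NAME this yields:
// INSERT INTO CLIENTS (CLIENT_ID, NAME) VALUES (:CLIENT_ID, :NAME)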

For the majority of the inserts and updates there is only a tiny jump in
the client's memory usage when the DSQL is prepared; it is only the
large records coming down that cause the huge jump in memory.

Thanks for your help Helen.

-Ryan

