| Subject | Re: [IBO] TIB_DSQL |
|---|---|
| Author | Helen Borrie |
| Post date | 2005-01-17T14:22Z |

At 01:28 PM 17/01/2005 +1100, you wrote:
>Hi,
>
>We are using the new IB 4.5 components and have come across this with
>our project.
>
>What it does is iterate over an xml document and insert or update the
>database with the data received.
>
>Some of the data that comes down is to be inserted into a table with 81
>columns (they are all parameters to the sql statement), when Prepare is
>called on the DSQL the memory usage in Windows jumps by about 25 - 30 Mb

That would figure. 81 fields is a lot of fields. It's OK, as expected.

>and is only released on termination of the application.

That would also figure, if you are using only CommitRetaining. It causes
allocated resources to remain allocated until you hard-commit (or
rollback). That is *not* desirable. You're depending on IBO's Disconnect
procedure to do a hard commit, since that's the only time a hard commit
will ever happen unless you make it happen elsewhere.
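
To put names on the two calls (assuming the usual TIB_Transaction methods
here, where tr is your transaction component; check them against your IBO
version):

```
tr.CommitRetaining;   // "soft" commit: the work is committed, but the
                      // transaction context and its server-side resources stay alive
tr.Commit;            // hard commit: ends the transaction and lets the server
                      // release everything tied to it
```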

>This is a major
>problem as the xml document can sometimes contain as many as 9000
>elements, I have processed up to about 90 elements until the system
>became totally unusable, using ~1.5Gb of memory.

Having memory usage constantly rise over the course of an iterative
operation is a sign that no (or inadequate) cleanup is occurring on the
server. CommitRetaining again is the usual culprit here (or, rather, the
programmer who uses it willy-nilly without ever performing hard commits).

In the case of a single transaction that is walloping through an
uncontrolled number of inserts, you'll also get bloated memory consumption,
as the server maintains a rollback log for inserts that doesn't get reset
until Commit. (Eventually the rollback log gets disabled when it tops out,
but it still won't get released until you hard-commit.)

I also get an itchy feeling that you're rolling repeatedly through the xml
document and picking up one element at a time...inserting on the first pass
and updating on the others. That's piggy-backing record versions on record
versions - <shudder>one insert and then 80 updates...</shudder>
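
If so, it would be worth gathering everything for an element up front and
writing it with a single statement execution per element, roughly like this
(a sketch only; f, FieldNames and Element.Value are placeholders for however
you walk the xml):

```
// One pass per element: read all 81 values first, then one INSERT (or one
// UPDATE), so the row gets one record version instead of 81.
for f := Low(FieldNames) to High(FieldNames) do       // FieldNames: placeholder list of the 81 column names
  dsql.ParamByName(FieldNames[f]).AsString :=
    Element.Value(FieldNames[f]);                     // Element.Value: placeholder xml accessor
dsql.Execute;                                         // one execution per element
```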

>Has anyone seen this before, or is there something obvious that I'm
>missing here? I have tried calling Free on the component, deleting and
>re-creating it each iteration, and unpreparing it, but to no avail.

It's a problem of transaction control. Destroying the statement object
will get rid of the statement resources and immediately recreate them as
soon as you recreate and prepare the object, so it's a transitory saving
that just uses more resources on the client. It won't do anything to clean
up the transaction resources that are building up on the server from soft
commits.

And, even with a cleaner approach to the transactions, with such a large
structure to insert into, I'd suggest that about 2000 iterations (requests)
per transaction would be about tops...possibly more if your user has pretty
well exclusive use of the server. Don't be frightened to make it less, but
don't let it be unlimited. An added (and possibly necessary) benefit,
since you are doing both updates and inserts in the same transaction, is
that it will at least cut down the potential number of multiple hits
on the same row, which is another source of engorgement of the transaction
state bitmap.

Use an explicit transaction (which disables Autocommit and thus disables
CommitRetaining); prepare your statement only once; and put your operation
inside an outer loop that does a hard commit after every 2000 requests.
Wrap that commit in a try..except block, so that each new cycle begins
with an unfettered transaction that is ready to start.

Oh, and don't be tempted to use Pessimistic locking for a job like this...
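
Putting that together, the loop looks something like this (an untested
sketch: the IBO property and method names are from memory, so check them
against your version, and ElementCount/GetElementValue are just placeholders
for your own xml handling):

```
// Assumes the IBO native TIB_Transaction (tr) and TIB_DSQL (dsql) are
// already created and connected, and dsql.SQL holds the 81-parameter
// INSERT (or UPDATE) statement.
procedure ImportElements(tr: TIB_Transaction; dsql: TIB_DSQL);
const
  BatchSize = 2000;                    // hard commit after this many requests
var
  i, ReqCount: Integer;
begin
  dsql.IB_Transaction := tr;
  dsql.Prepare;                        // prepare ONCE, before the loop
  ReqCount := 0;
  tr.StartTransaction;                 // explicit transaction: no Autocommit
  for i := 0 to ElementCount - 1 do    // ElementCount: placeholder
  begin
    // assign all 81 params for this element, then a single Execute
    dsql.ParamByName('FIELD1').AsString := GetElementValue(i, 'FIELD1');  // placeholder
    // ...the other 80 assignments...
    dsql.Execute;
    Inc(ReqCount);
    if ReqCount >= BatchSize then
    begin
      try
        tr.Commit;                     // hard commit - releases the server resources
      except
        tr.Rollback;                   // leave things clean for the next cycle
        raise;
      end;
      ReqCount := 0;
      tr.StartTransaction;             // begin the next cycle unfettered
    end;
  end;
  tr.Commit;                           // hard-commit the final partial batch
end;
```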

Now, all that said, if you *are* taking care to explicitly start and commit
transactions, then this problem is "something else" and we should look at
a trimmed (please!! not 81 ParamByName assignments!!) version of your code
and the dfm text for your DSQL object.

Also worth looking at the database cache settings, to ascertain whether some
of this memory growth isn't just too big a cache for the conditions. A
10,000-page cache in a database with a 16K page size sounds like a nice way
to use up memory that would otherwise be lazing about...but it doesn't win
much if your application finds ways to gobble up what's left and still
demands more.
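
(Just for scale: 10,000 pages x 16Kb comes to roughly 160 Mb of memory for
the page cache alone, before the application asks for anything.)
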
HB