Subject | Re: [IBO] Execution of prepared statements consumes a lot of memory
---|---
Author | Christian Gütter
Post date | 2004-12-10T14:30:36Z
Hi Helen,
thanks for your comprehensive answer.
[snipped informative stuff about inserts and blob inserts]
> Your deduction that the memory growth is somehow happening as a result of
> opening the cursors seems coincidental: memory growth certainly does occur
> once in each iteration of the loop. Of course, at the point where you
> inspect memory usage, the next iteration of the loop is under way. Since
> you last checked, the system has grabbed one or more pages of cache for the
> latest blob and, if needed, one or more for the row data, and has updated
> (increased) both the TSB and the insert cache, all for the previous iteration.

I do not think that the memory growth when opening the cursors is a
coincidence. I have debugged my program, and when walking step by step
through my code, I noticed that every time a cursor was executed,
memory usage grew. Memory usage also grew when the blobs were inserted,
but this was not significant in terms of size.
But in each iteration of the loop, the cursors suck several hundred KB
of RAM in the server process.
You missed one important detail: I said that the cursors' memory usage
increases significantly when there is a high number of records in the
table they run their select on, even when the result set is the same.
Here is a small example:
A select loop fetching 1000 records out of a table of 1000 records
consumes 150 MB of RAM. In comparison, selecting the same 1000 records
out of a table of 10000 records consumes 1.3 GB of RAM. The result of
the whole transaction is the same in both cases, but memory usage
increases with the number of 'uninteresting' records in the source table.
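To make the scenario concrete (table and column names here are made up,
since the real statements are not shown): both cases run a query of this
shape, and only the size of the underlying table differs, not the result:

    /* hypothetical query: returns the same ~1000 'interesting' rows
       whether BOOKINGS holds 1000 or 10000 records in total */
    SELECT ID, AMOUNT, DOC_BLOB
    FROM BOOKINGS
    WHERE RUN_DATE = :RUN_DATE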
> If the application is being run on the server, you might find that FR needs
> to do some kind of buffering to compensate for not having a buffered
> dataset, and isn't cleaning up properly. This is nothing more than
> conjecture, though. It might be worth seeing whether using an IB_Query for
> the detail sets would avoid that, if the app is server-based.

I will try that when I have the time.
> It depends on whether high memory usage is more acceptable to you than the
> requirement to perform this operation cyclically. It's a pragmatic
> decision...time is costly, memory is cheap.

This operation cannot be performed cyclically. The whole booking stuff
must be done within one transaction. I agree that memory is cheap, but
I also think that applications should keep system requirements low.
My application in general would perform well for 20 concurrent users
on a server with 256 or 384 MB of RAM, if it were not for this booking
stuff, which is executed five times a week and can suck more than 1 GB
of RAM. I was able to cut down the memory usage by creating a
'permanent temporary table' to which all 'interesting' records are
copied before the loop we are talking about starts. If I had not done
this, the loop would have sucked 4 GB of RAM. (To be honest, I
implemented this temporary-table stuff because one day the loop sucked
up all the server memory and FB stopped responding. Before that, I did
not even know what all those temp-table guys using M$SQLServer were
talking about ;-)
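In case it helps anyone else, the staging approach looks roughly like
this (a minimal sketch with made-up table and column names; Firebird has
no real temporary tables at this point, so it is an ordinary table that
is emptied and refilled before each run):

    /* clear the permanent 'temp' table from the previous run */
    DELETE FROM BOOKING_WORK;

    /* copy only the 'interesting' rows, so the booking loop's cursors
       select from the small BOOKING_WORK table instead of the large
       BOOKINGS table */
    INSERT INTO BOOKING_WORK (ID, AMOUNT, DOC_BLOB)
      SELECT ID, AMOUNT, DOC_BLOB
      FROM BOOKINGS
      WHERE RUN_DATE = :RUN_DATE;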
> The logic looks reasonable to me - except for cBrokerPos2, that is opened
> in each iteration of the loop yet never has any parameter values applied to
> it. Presumably, it surfaces some static data for the report, so calling
> its First() method once, before the loop starts, should suffice.
> It's eating a bit of server resource unnecessarily at each iteration.

cCSBrokerPos2 gets its parameter values via MasterLinks. It must be in
the loop. Of course you could not see this, because I did not provide
the masterlinks.
> The main memory drain I see is that each iteration of the loop is passing a
> (possibly quite large) blob to an INSERT statement.

I must disagree. It is the cursors which cause most of the memory
drain. The blobs are compressed and quite small. As I said,
memory usage increases significantly when the cursors are executed.
> Obviously, if you could break the process up into a number of smaller
> transactions, and hard-commit each one, the overall memory load will be
> reduced. Ending the transaction will "wash out" the TSB and also reset the
> insert cache.

This is not possible, as the whole booking stuff must be performed
within one transaction.
> It won't clear the page cache, though. Those pages will stay in cache
> until BOTH a) they are not needed by any transaction AND b) their space is
> needed because the page cache is full. If you don't have a special reason
> to need hundreds of megabytes of cache, you might gain something by
> reducing the size of the cache to lower the ceiling and allow it to get
> full sooner.

The default page cache size per database is 2048 pages, and my page
size is 8 KB. AFAIU, the page cache should then only consume
2048 x 8 KB = 16 MB?
With kind regards,
Christian