Subject: Re: [IB-Architect] Memory allocation in Super Server
Author: Charlie Caro

So the exact same application/InterBase configuration running under
Windows NT 4.0 (Service Pack x) was not accumulating memory in the IB
server process?

BTW, InterBase on W2K isn't a certified configuration yet. I wouldn't
advise running production applications there. From a technical
standpoint, if Microsoft has done W2K right, it shouldn't matter.


Dan Palley wrote:
> Ann:
> Thanks for the detailed response. We've seen a big jump in memory usage on
> our production server since moving to Windows 2000. The database is not
> big, about 500 MB, but we have 50-60 users hitting it on a regular basis.
> After a week or so, the memory used by Interbase is about 900 MB; if we shut
> the server down and restart, the memory is freed.
> Others have suggested that long running transactions can cause memory to be
> held on the server, but the memory is not released when all connections are
> closed. Our cache settings are set to the default.
> Other users have reported similar problems on Windows 2000, so I suspect
> something changed in the memory allocation scheme in Win2k that's having an
> adverse effect on Interbase.
> Dan
> -----Original Message-----
> From: Ann Harrison [mailto:harrison@...]
> Sent: Thursday, October 19, 2000 10:17 AM
> To:
> Subject: Re: [IB-Architect] Memory allocation in Super Server
> At 09:36 AM 10/19/2000 -0700, Dan Palley wrote:
> >I found this item in the Interbase knowledgebase and I'm somewhat
> >concerned about its implications:
> >
> >"When InterBase allocates memory to perform operations, it does not ever
> >return this memory to the operating system. This memory is returned to
> >InterBase's internal memory free list, but it is never returned to the
> >operating system for other processes to use. Thus the memory is
> >not freed until the InterBase service is stopped or the machine is
> >rebooted."
> >
> >What type of operations in Interbase would cause this to happen? Was this
> >addressed at all in Interbase 6?
> InterBase manages a pool (actually several pools and a lagoon) of
> memory from which it allocates various blocks that describe metadata,
> compilation elements, execution trees, etc. Its memory management
> is quite good - it won't, for example, leave little dribs and drabs
> around, too small to use. When a request goes from compilation to
> execution, the compilation specific memory is released to the pool.
> When it completes and the prepared query is released, all of the
> memory associated with the query is returned to the pool.
> Only if there is not enough memory available in the pool will memory
> be allocated from the operating system. Generally the amount of
> memory used rises in proportion with the number of users. So, if
> your server has 2 users on weekends (me and you, no doubt) and 50
> during the week, it will hold enough memory to serve all fifty, but
> will go from 2 back to 50 without allocating more.
> In general, you will find that InterBase doesn't use a lot of memory,
> unless you request a ridiculous number of pages for the cache. One
> of the major tasks in moving to the SuperServer architecture was
> locating and stopping all memory leaks. This is a non-problem (I
> think).
> Regards,
> Ann
> To unsubscribe from this group, send an email to: