Subject RE: [firebird-support] Re: Memory usage problems
Author Robert DiFalco
I may be wrong, but I am guessing that "gds__alloc: memory pool
corrupted" is Firebird's response to running out of memory, which
eventually results in your connection problem. This *can* be the result
of many prepared statements (or connections) that are never closed.
However, I've also heard of it resulting from certain UDF libraries.
Are you using any of these, say FreeUDF? I found this, which may or may
not be relevant:


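To make the statement-leak point concrete, here is a minimal sketch of the close-even-on-exception discipline that prevents leaked statements and connections. FakeConnection and FakeStatement are hypothetical stand-ins for java.sql.Connection and PreparedStatement, used so the example runs without a live Firebird server; the same try-with-resources pattern applies to the real JDBC types.

```java
// Demonstrates that try-with-resources closes every resource even when
// the body throws, which is what keeps statements and connections from
// leaking. The Fake* classes are stand-ins for the java.sql types.
public class CleanupDemo {
    static class FakeConnection implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    static class FakeStatement implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        FakeConnection conn = new FakeConnection();
        FakeStatement stmt = new FakeStatement();
        try (FakeConnection c = conn; FakeStatement s = stmt) {
            // Simulate a query blowing up mid-transaction.
            throw new RuntimeException("query failed");
        } catch (RuntimeException e) {
            // Even on failure, both resources were closed (in reverse
            // declaration order) before control reached this handler.
        }
        System.out.println("connection closed: " + conn.closed);
        System.out.println("statement closed: " + stmt.closed);
    }
}
```

With real JDBC code the idea is the same: open the Connection and PreparedStatement in the try header, and commit or roll back explicitly inside the body so no transaction is left dangling.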
-----Original Message-----
From: William L. Thomson Jr. [mailto:support@...]
Sent: Saturday, September 20, 2003 1:09 PM
To: firebird-support
Subject: Re: [firebird-support] Re: Memory usage problems

On Sat, 2003-09-20 at 15:41, robert_difalco wrote:
> Are you using Java/JDBC with a driver other than JayBird?

No, I am using Jaybird. Maybe not the latest version, but I believe it
is the most recent release.

> I've seen
> this happen when using interclient. Basically, it seems to leak
> memory if you do not cycle out connections. Also, don't
> underestimate the memory you may be leaking by having transactions
> that are never committed or aborted.

I am not underestimating the memory usage, but seeking an explanation
for the errors in the log. I have already addressed the open transaction
issue.

If I have a bunch of open transactions, are errors like that in the log
normal? It seems to me something is going wrong in Firebird's memory
management.

If Firebird was consuming large amounts of memory and there were no
errors in the log, I would know the problem is me or my code.

While I know I am partly to blame for the memory usage, I still want to
know whether those errors are normal operation, a bug, or something I am
doing very wrong that is causing them.

I have been working with Firebird for some time and have yet to come
across errors like these.

At times I have had larger numbers of open transactions against a larger
database without the same problems. So I know open transactions waste
memory, but I have never seen the server run itself out of memory with
errors like that in the log.

I wonder if it's a hardware issue, maybe bad memory or something? The
machine the db is currently on is not the newest. We are still in the
initial stages, and a new large server is not scheduled till the next

William L. Thomson Jr.
Support Group
Obsidian-Studios, Inc.
