Subject: Re: [firebird-support] server memory usage on database restore
Author: Gary Benade
Helen, thanks again

> You've got a database engine in there, somewhere, and database
> engines just loooooove RAM. They get quite miffy when you starve them.
Ok, so here's the situation. The database is getting on for 3GB now. Some main
tables have over 4 million rows. Besides all the inserts happening, the big
bosses want to run some stats, and these queries typically join about 6 of
these large tables, with indexes on all the main linked fields. Plans come
back all sparkly, so the problem is not there. If I drop the cache pages to
2048, these queries run for minutes instead of seconds. Something as stupid
as changing the page size back to 8K from 4K made 30 seconds' difference. If
I am on the wrong track, then what do I throw at the server to get it to be
more efficient? Do I play with SortMemBlockSize or SortMemUpperLimit and
give the server some breathing space? I know that Firebird was designed to
run on a minimum-spec machine, in the days when 64MB of RAM was a lot, but I
want it to use as much of the 750MB as it needs to get the job done fast. I
have run complex queries that took minutes to return, and when I checked the
RAM usage in Task Manager, FB was only using 32MB. If I give it more
cache, it uses more RAM, and I get all happy and think things are going
faster. Surely, if I ask InterBase to run a query, it loads an index into
memory and does a sort, then loads the next index into memory and does a
merge, etc. Is this done in cache pages or in some other memory pool I
have missed - one that can hopefully be enlarged?
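For reference, here is a sketch of the firebird.conf entries in question - the parameter names are the ones from my 1.5 conf file; the values shown are purely illustrative, not recommendations, and the defaults in the comments are what my copy shows, so check your own:

```
# firebird.conf -- memory-related settings (illustrative values only)

# Number of database pages cached per database (commented-out default: 2048).
# At an 8K page size, 16384 pages is roughly 128MB of page cache.
DefaultDbCachePages = 16384

# Size of each block allocated for in-memory sorting (default: 1048576).
SortMemBlockSize = 1048576

# Upper limit of memory used for sorting before the engine spills
# to temporary files on disk.
SortMemUpperLimit = 134217728
```

If a big join/sort blows past SortMemUpperLimit, the sort goes to the temp directory on disk, which would explain minutes-instead-of-seconds behaviour even with good plans.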

> So what volume of work and time are we talking about here? One single
> transaction with no commits for ...how long? 150,000 inserts?
> But you're not committing, so those new records
> just hang there in the write cache, at the mercy of Windows, until someone
> decides to commit them (or till lightning strikes the nearby power
> substation....)
The transaction is closed and the session committed when every client
disconnects, so I'm not sure why this amount of unwritten data is hanging
around. I will need to go and look at my code. In the meantime, FW=0.
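For anyone following along, forced writes can be toggled per database with the standard gfix tool (the database path and credentials below are made-up examples):

```
# Disable forced (synchronous) writes: faster bulk inserts, but dirty
# pages sit in the OS write cache until Windows flushes them.
gfix -write async C:\data\mydb.fdb -user SYSDBA -password masterkey

# Re-enable forced writes once the bulk load is done.
gfix -write sync C:\data\mydb.fdb -user SYSDBA -password masterkey
```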

> Except DefaultDBCachePages, right? Look there to find your memory wastage.
This is commented out, so I assume it is still at the default of 2048.
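(Side note, in case it helps anyone: the cache size can also be set per database with gfix, and if I understand correctly a per-database value takes precedence over the conf file setting - path and credentials below are again just examples:)

```
# Set the default page-cache buffers for one database to 16384 pages.
gfix -buffers 16384 C:\data\mydb.fdb -user SYSDBA -password masterkey
```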

Thanks for your patience and excellent answers
Gary