Subject | Re: [firebird-support] server memory usage on database restore |
---|---|
Author | Helen Borrie |
Post date | 2004-06-02T12:30:11Z |
At 01:27 PM 2/06/2004 +0200, you wrote:
>Helen, thanks again
>
> > You've got a database engine in there, somewhere, and database
> > engines just loooooove RAM. They get quite miffy when you starve them.
>Ok, so here's the situation. The database is getting on 3GB now. Some main
>tables have over 4 million rows. Besides all the inserts happening, the big
>bosses want to run some stats, and these queries typically join about 6 of
>these large tables with indexes on all the main linked fields. Plans come
>back all sparkly, so the problem is not there. If I drop the cache pages to
>2048, these queries run for minutes instead of seconds.
OK, that does make quite a difference to your cache needs. If these stats
queries are hitting the same sectors of tables and indexes then a bigger
cache will help to speed them. But not a 20000 * 8Kb cache on a 750 Mb
server. Try 8192 pages, and not more than about 10,000.
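If you do want to pin the cache at a sensible figure for that database, gfix
can write the value into the database header. Something like this, with a
made-up path and credentials that you'd obviously substitute with your own:

    REM illustrative path and password -- use your own
    REM writes a cache size of 10000 pages into the database header
    gfix -user SYSDBA -password masterkey -buffers 10000 C:\data\stats.fdb

Once a figure is in the header it overrides DefaultDBCachePages, so it only
needs doing once per database.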
Just don't assume that, because the queries ran faster or slower after a
shutdown, it had anything to do with the cache size. It might; or it
might be something quite different causing the slowdown or speedup, like
garbage collection, unbalanced indexes, etc. (or the lack of them..)
>Something as stupid
>as changing the page size back to 8k from 4k made 30 seconds difference. If
>I am on the wrong track, then what do I throw at the server to get it to be
>more efficient.
More RAM, usually. And good index housekeeping.
Just get past the belief that the DBCache is the crucial memory user. It
can be helpful when you're re-accessing the same rows of data repeatedly,
but other things could be much more useful for making queries go
faster. For example, you describe the plans as "sparkly" but are they
using slow indexes? You're doing lots of inserts but do you ever rebuild
indexes? Plans are showing indexes being used but are they really useful
indexes or would you be better off without them? Do you recompute indexes
regularly? Do you watch the statistics on the indexes in these big tables?
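If you want something concrete to try in isql, the usual housekeeping looks
roughly like this. The index name is invented, and remember you can't
deactivate the indexes that enforce keys and constraints:

    /* recompute the selectivity figures the optimizer relies on */
    SET STATISTICS INDEX IDX_ORDER_CUSTOMER;
    /* rebuild a lopsided index by toggling it off and on again */
    ALTER INDEX IDX_ORDER_CUSTOMER INACTIVE;
    ALTER INDEX IDX_ORDER_CUSTOMER ACTIVE;
    COMMIT;

Running gstat -i against the database file will show you depth and fill
figures for each index, which is a fair guide to which ones need attention.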
>Do I play with the SortMemBlockSize or SortMemUpperLimit and
>give the server some breathing space.
Fb 1.5 will use RAM for sorts if it is available. If your queries are
grouped or ordered, you might like to try increasing one or both if memory
usage seems lower than you expect.
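In firebird.conf the pair look something like this -- the parameter names
are the real ones, the numbers are only examples (in bytes), not
recommendations, and the server has to be restarted for a change to take
effect:

    # sort memory settings in firebird.conf -- example values only, in bytes
    SortMemBlockSize = 1048576
    SortMemUpperLimit = 134217728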
>I know that firebird was designed to run on a minimum spec machine, in the
>days when 64mb of ram was a lot, but I want it to use as much of the 750mb
>as it need to get the job done fast.
A lot of that 750 Mb is being used by the server if you are doing lots of
inserts AND running queries with six-way joins. Firebird wasn't designed
to run on 64 Mb machines, early versions of InterBase were. That doesn't
mean it's *restricted* to a 64Mb memory model! Still, there's a legacy of
tight memory usage that you can take advantage of, especially around indexes.
>I have run complex queries that took minutes to return, and when I checked the
>ram usage in task manager FB was only using 32MB of ram. If I give it more
>cache, it used more ram, and I get all happy and think things are going
>faster.
The benefit of the cache will increase as the day goes by, as the contents
of the cache become relevant to more of the queries. Rarely-used pages
will be knocked off and oft-used ones will occupy a greater proportion of
the space available.
Page size might well be more crucial to your setup than cache size. A 16
Kb page size might help some of those queries if it can accommodate more
rows on a data page and more nodes on an index page.
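Page size can only be changed by a backup and restore, so the cycle would be
roughly this (file names invented; restore to a new file and test it before
you swap it in):

    REM back up the existing database, then restore it with 16 Kb pages
    gbak -b -user SYSDBA -password masterkey C:\data\stats.fdb C:\data\stats.fbk
    gbak -c -p 16384 -user SYSDBA -password masterkey C:\data\stats.fbk C:\data\stats16k.fdb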
>Surely, if I ask interbase to run a query, it loads an index into
>memory and does a sort, it then loads the next index into memory and does a
>merge etc etc.
Kind of. It also builds bitmaps of indexes and puts those in memory. But
it also forms bitmaps from data streams where an index would be too slow
and, as you note, several varieties of etc etc etc etc etc
>Is this done in cache pages or in some other memory pool I have missed -
>one that can hopefully be enlarged.
Not in an email, we'd be here for days. Pick up Pavel Cisar's white paper
from the ibphoenix site, about the optimizer. That will tell you quite a
few surprising things about the ways memory is used.
> > So what volume of work and time are we talking about here? One single
> > transaction with no commits for ...how long? 150,000 inserts?
> But you're not committing, so those new records
> > just hang there in the write cache, at the mercy of Windows, until someone
> > decides to commit them (or till lightning strikes the nearby power
> > substation....)
>The transaction is closed and the session committed when every client
>disconnects, so, I'm not sure why this amount of unwritten data is hanging
>around. I will need to go and look at my code. In the meantime, FW=0
>
>Except DefaultDBCachePages, right? Look there to find your memory wastage.
>This is commented out, so I assume it is still at the default 2048.
Depends. Check for sure by using SHOW DATABASE on each open db. The
DefaultDBCachePages value only applies to databases that don't have a cache
size written in their headers.
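To see what a database is really running with, connect to it in isql and do
SHOW DATABASE, or run gstat against the file (path invented):

    REM header page analysis -- look for the Page size and Page buffers lines
    gstat -h C:\data\stats.fdb

If the buffers figure in the header is zero, the database is picking up the
DefaultDBCachePages value from firebird.conf.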
>Thanks for your patience and excellent answers
No more today. I've run out of time ... you might luck in, and have Ann or
Pavel come in and join the fray.
/heLen