Subject Re: [firebird-support] server memory usage on database restore
Author Helen Borrie
At 11:14 AM 2/06/2004 +0200, you wrote:
>Helen
>Thanks for your reply
>
> > Re your database cache, 200000 * 8Kb is a pretty high allocation for such
>a
> > small amount of RAM. How did you arrive at such a huge cache?
>Seems to be the fastest. I have tried combinations of 10000 pages with 4, 8
>and 16k pages, and 20000 with 4 and 8k pages, and this seems to get the job
>done. Maybe I need to rethink this?

I think so. The default of 2048 pages isn't some kind of miniminiminimum,
it's a general all-round satisfactory setting for low-calorie networks like
yours. If you are doing mainly inserts, higher settings are simply a total
waste of resources - it's literally memory you are stealing from the
system. You've got a database engine in there, somewhere, and database
engines just loooooove RAM. They get quite miffy when you starve them.

>I have a maximum of 8 clients connecting
>at a time, and they pump about 10000 rows into about 15 tables every day. I
>don't know how else to come up with a cache size aside from a guess-timate.

Set it back to the default and stop thinking it's important. It matters a
little bit when lots of users are constantly accessing the same data for
searches and updates. The database cache really isn't a key player in
performance - unless of course you sabotage performance by raising it
through the roof.


>One other thing,
>
>I have turned forced writes off for a week to see if there was a performance
>increase, and there was, about 4x increase during bulk inserts.

Yes, writes to disk are *such* a pain, aren't they? Computers go much
faster when they don't have to write anything.

>Problem is,
>when I shut the server app down, it "hung" for about 3 hours before actually
>exiting.

Three hours' worth of writes is a whole lot of unwritten work hanging around.

>Transactions are created when a client logs on and destroyed when
>their session is finished

So what volume of work and time are we talking about here? One single
transaction with no commits for ...how long? 150,000 inserts?

>so I assume this had something to do with a cache
>being flushed - but I thought FB did that on a regular interval, or after a
>fixed amount of activity.

Cache, cache, cache. Let's be clear. Work for asynchronous writes isn't
stored in the DB cache, it's stored by the operating system in its write
cache. Before v.1.5, Windows did the Windows thing: if forced writes were
off, it
write-cached EVERYTHING until the *server* was shut down. Fb 1.5
introduced mechanisms to kick Windows every so often and make it write
*committed* stuff to disk. But you're not committing, so those new records
just hang there in the write cache, at the mercy of Windows, until someone
decides to commit them (or till lightning strikes the nearby power
substation....)

>If there had been a power failure I am sure lots
>of data would have been lost.

All of it, seemingly. I'm glad to hear you gave up on tightrope-walking.
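For the archives: forced writes can be switched back on in place with the
gfix utility, no restore needed. A sketch - the database path and
credentials below are placeholders, substitute your own:

```shell
# Re-enable forced (synchronous) writes on a database.
# Path, user and password are placeholders for illustration.
gfix -write sync -user SYSDBA -password masterkey /data/mydb.fdb
```

(`-write async` is the switch that turns them off again, should anyone feel
like tightrope-walking.)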

>The firebird.conf settings are default at the moment.

Except DefaultDBCachePages, right? Look there to find your memory wastage.
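Putting it back means a line like this in firebird.conf - 2048 is the
shipped default; check the exact spelling of the parameter name in your own
copy of the file:

```conf
# firebird.conf -- back to the shipped default cache of 2048 pages
# (a sketch; verify the parameter name against your own firebird.conf)
DefaultDbCachePages = 2048
```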

And redo your import application so it partitions the job into
transactions of about 3,000-5,000 operations, depending on row sizes.
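That partitioning can be sketched like this in Python. The table name,
column list and batch size are assumptions for illustration only; the
connection/cursor objects follow the generic DB-API style that Firebird
drivers such as fdb expose. The point is the commit pattern, not the names:

```python
def insert_in_batches(con, rows, batch_size=4000):
    """Insert rows, committing every `batch_size` operations so that no
    single transaction accumulates a whole day's uncommitted work.
    Table and column names here are hypothetical examples."""
    cur = con.cursor()
    count = 0
    for row in rows:
        cur.execute(
            "INSERT INTO readings (sensor_id, taken_at, value) "
            "VALUES (?, ?, ?)",
            row,
        )
        count += 1
        if count % batch_size == 0:
            con.commit()        # hand the batch to the engine now
            cur = con.cursor()  # fresh cursor for the next transaction
    con.commit()                # commit the final, partial batch
    return count
```

With a real connection (something like `con = fdb.connect(...)` against
your server) each commit lets the engine flush that batch, instead of
leaving 150,000 rows hanging in a write cache until shutdown.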

/heLen