Subject: Re: [firebird-support] Server and DB page sizes and memory
Author: Erik LaBianca
I too have been having performance problems due to lack of memory. I am
working with a table of approximately 4.5 million records. The .fdb file
ends up using around 1GB of disk space, although adding a number of
indexes pushes it up to around 1.8GB.

Among other things I need to be able to 'walk' the database in sorted
order, setting a 'sequence number' on each record. I have a stored
procedure to do this which works great, but takes several hours: anywhere
from 1 to 6 hours to longer than I was willing to wait, depending on
platform and optimization settings. The disk is under very heavy I/O the
entire time. I was able to improve performance a good bit by changing my
procedure to work on 'subsets' of the table instead of trying to sort the
entire table at once and then walking it, but it is still very slow.
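
To give an idea of the shape of the procedure (this is a simplified
sketch rather than my actual code; MY_TABLE, ID, SORT_KEY and SEQ_NO are
placeholder names standing in for my real schema):

  SET TERM ^ ;

  CREATE PROCEDURE SET_SEQUENCE_NUMBERS
  AS
    DECLARE VARIABLE seq INTEGER;
    DECLARE VARIABLE id  INTEGER;
  BEGIN
    seq = 0;
    /* Walk the table in sorted order; the ORDER BY over 4.5 million
       rows is where most of the time and disk I/O goes. */
    FOR SELECT ID
        FROM MY_TABLE
        ORDER BY SORT_KEY
        INTO :id
    DO
    BEGIN
      seq = seq + 1;
      UPDATE MY_TABLE SET SEQ_NO = :seq WHERE ID = :id;
    END
  END ^

  SET TERM ; ^

The 'subset' variant does the same thing over one range of the sort key
at a time instead of sorting the whole table in one go.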

Reducing the record count to around 1 million (and the database size to
around 250MB) results in the unoptimized procedure running in about 7
minutes.

I'm using 8192-byte pages and 65535 buffers (in other words, I'm maxed
out all around). My machine has 4GB of RAM, so I'd like to be able to use
that RAM effectively, but since I can't increase the page cache any
further and Firebird doesn't seem to benefit much (at all?) from the
filesystem buffer cache, I seem to be bottlenecked.
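
For what it's worth, the cache math (assuming SuperServer, where the
page cache is shared between connections) works out to:

  8192 bytes/page * 65535 buffers = 536,862,720 bytes, i.e. about 512MB

So even maxed out, the cache covers only about half of the 1GB file, and
less than a third of it once the indexes push it to 1.8GB.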

I checked the 2.0 release notes, but I didn't see anything indicating
that Firebird's ability to use spare resources has improved. Am I missing
something, or is there some other tuning I could do to make better use of
my machine?

Thanks

--erik


