Subject: Re: Memory tune-up on Firebird 1.5 Embedded
Author val_melamed
Hello and thank you all for your replies. I would just like to make a
few points in my original post clearer:

It is a single-user, desktop application. 10,000-15,000 records in the
DB can be considered a "fairly big" amount of data. The DB file with
15,000 records built for performance testing is about 13 MB. That's
why I put the size of the DB on the "scary big" side: "typically a 20-
50MB FDB file". So: no, we are not talking about millions of records
but about tens of thousands at the most. A few more numbers: the
above-mentioned 15,000-record DB, loaded into memory, adds about
20 MB to the application's working set, but by tuning Firebird's
parameters I was able to cut that by 10 MB.

I generally disagree with the statement "Firebird is a client/server
database and you should work with datasets only containing records
and fields that you're currently working on". As far as how it gets
used, I would put the *embedded* Firebird in the same class as
Access, FoxPro, and other desktop database engines. The point about
applications built on top of these DB systems is that they are
single-user apps, and usually the development team made a conscious
decision to use a desktop DB precisely because it is a very reliable
and well-thought-out data store that can capture and persist some or
all of the data structures and relations from the problem domain.
Features like multi-user access, advanced security options, and the
ability to perform heavy management tasks are usually not important,
so very often some or most of these features are dropped. E.g.
"Windows Embedded Server has the same features as Superserver,
except multi-user support and password protection". Certainly a big
discussion can take place here, but the fact is that for a desktop
application with the outlined data-size requirements, loading a lot
of the data into memory is not unheard-of but rather a viable and
relevant approach that allows for a very rich and responsive user
interface. And that is exactly what our application is about: a very
rich and responsive user interface.

These days memory is cheap and desktop database engines are free. We
know very well that statements like these can be a very slippery
slope. We do watch our resource utilization very closely; in fact we
are very careful and selective about what exactly gets loaded, but it
may be more than what you would typically do in a server application,
where you only load the dataset with what "you're currently working
on". As a matter of fact, our development team read all your replies
and had a long, pragmatic discussion about them. In the end we all
agreed that, given the requirements, we stand by our design decision,
even as we evolve it to be mindful of not loading data unnecessarily.

After all this, let me rephrase the question: with these three lines
in the config file, we were able to cut the data memory footprint in
half. Can we go even lower with these numbers?

DefaultDbCachePages = 256
SortMemBlockSize = 524288
SortMemUpperLimit = 4194304
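To give a feel for what these numbers imply, here is a rough back-of-
the-envelope sketch of the memory they allow. The 4 KB page size is an
assumption on our part: the real page size is fixed when the .fdb file
is created, so your numbers may differ.

```python
# Back-of-the-envelope estimate for the three firebird.conf settings
# above.  PAGE_SIZE is an assumption (the actual page size is set at
# database-creation time).

PAGE_SIZE = 4096                 # bytes per database page (assumed)

default_db_cache_pages = 256     # DefaultDbCachePages
sort_mem_block_size = 524288     # SortMemBlockSize  (512 KB)
sort_mem_upper_limit = 4194304   # SortMemUpperLimit (4 MB)

# Page cache: pages kept in memory for an attached database.
cache_bytes = default_db_cache_pages * PAGE_SIZE
print(f"page cache: {cache_bytes // 1024} KB")                   # 1024 KB

# Sort memory grows in SortMemBlockSize chunks up to the upper limit,
# so sorting can claim at most sort_mem_upper_limit bytes.
print(f"max sort memory: {sort_mem_upper_limit // 1024**2} MB")  # 4 MB
```

Under these assumptions the page cache itself accounts for only about
1 MB, so of the two knobs the sort-memory ceiling is the bigger one.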

Are there any other resources that we can disable or limit without
hurting the DB performance and functionality, given our current usage
pattern in which the DB is being used much more as a reliable
persistence layer than as an ongoing SELECT-driven data source?

On a different but important note: many of our customers run our
application from a flash drive (U3 compatible): the database (.fdb)
remains on the flash drive, while the executable is temporarily
copied to the hard drive and executed on the host machine. Are there
any parameters that you would suggest tweaking in this scenario? The
problem there is that the customer can basically yank out the drive
without a proper eject, which means you cannot assume write
operations will complete as expected. That would be somewhat similar
to the file storage going down or going offline on a server.
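The one lever we are aware of in this area is the per-database
forced-writes flag, which is set with the gfix utility rather than in
firebird.conf. Is something along these lines the right direction?
(The database path below is just illustrative.)

```shell
# Forced (synchronous) writes make Firebird flush pages to disk as
# they are written instead of leaving them in the OS cache -- slower,
# but much safer when the drive can disappear mid-write.
gfix -write sync E:\data\app.fdb
```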

Thank you again
Val Melamed