Subject: Forced write, page size and buffer size
Author: Aldo Caruso
Hello,

For reliability reasons, I decided to turn on forced writes on a
database running on Linux.
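
For reference, forced writes are toggled per database with gfix (I'm
assuming a Firebird setup here; the database path is a placeholder):

  gfix -write sync /path/to/mydb.fdb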

After that, I noticed that some massive updates (100K records) took
nearly 5 minutes, whereas with async writes they used to take 10 seconds.

One solution is, of course, to disable sync writes while doing massive
updates. Unfortunately, massive updates are not always under the
database admin's control (some end-user actions can lead to massive
updates indirectly, by means of triggers).
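
As a sketch, that workaround would look like this, with forced writes
disabled only around the batch job (file names are placeholders):

  gfix -write async /path/to/mydb.fdb
  isql -i massive_update.sql /path/to/mydb.fdb
  gfix -write sync /path/to/mydb.fdb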

Another approach I tested was increasing the page size from its default
value (4 KB) to the maximum allowed value (16 KB). The speed improved
notably (1 minute for the update plus 10 seconds for the commit, though
sometimes 2 seconds for the update and 40 seconds for the commit).
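
As far as I know, the page size can only be changed through a
backup/restore cycle, i.e. something like this (assuming gbak; file
names are placeholders):

  gbak -b /path/to/mydb.fdb /tmp/mydb.fbk
  gbak -c -page_size 16384 /tmp/mydb.fbk /path/to/mydb_16k.fdb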

It should be pointed out that 4 KB was already adequate, given the
record size (max. 300 bytes) and the maximum index depth (always < 3).

Going one step further, I increased the cache pages from the default
(2048) to 8192. A small performance improvement was observed, but
nothing significant. It should also be noted that, with a 16 KB page
size, the database occupies 5700 pages on disk, so there are enough
cache pages to hold the entire database in memory.
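
For reference, the buffer count can be set per database with gfix
(again assuming Firebird; the path is a placeholder):

  gfix -buffers 8192 /path/to/mydb.fdb

or globally, for all databases, in firebird.conf:

  DefaultDbCachePages = 8192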

Given this scenario, my questions are the following:

1) Does it make sense to activate forced writes on Linux?
2) Is increasing the page size the right approach to improving
performance? What are the drawbacks of setting the page size to 16 KB?
3) The same question stands for cache pages: is it useful (or does it
have disadvantages) to have as many cache pages as there are pages on
disk (provided there is enough RAM)?

Thanks in advance for any clues.
Aldo