Subject Re: [firebird-support] Re: My Firebird fails on Large database size, only way to fix is by reboot
Author Helen Borrie
At 10:19 AM 19/11/2009, you wrote:

>Please pardon my ignorance Eduardo, I never do any maintenance on the DB.

That is the most likely cause of your current problems. However, you still have not reported what messages you have received at the times when you could not connect to your database. Because of that, nobody can really advise you about what to do.

>Could you kindly advise me how to do a sweep

You need the gfix tools. Assuming Windows, you will find gfix.exe in the \bin directory of the Firebird installation.

Docs here:
(or you can go to the Doc Index and download the gfix manual as a PDF)

Do a sweep only at this stage and perhaps a validation. Don't try to do a mend unless you see error messages indicating damage.
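From a Windows command prompt in the \bin directory, the two operations would look something like this (the database path, user and password here are placeholders for your own):

```shell
rem Read-only validation first: reports errors, repairs nothing
gfix -v -full -user SYSDBA -password masterkey C:\data\mydb.fdb

rem Sweep: clears out garbage versions of committed records
gfix -sweep -user SYSDBA -password masterkey C:\data\mydb.fdb
```

Both need exclusive or near-exclusive access to run in reasonable time, so do them when users are off the system.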

>I think I can do the backup/restore and know how to configure pagesize through firebird.conf

You cannot configure page size through firebird.conf. You set it as a parameter of the restore. When you restore, do NOT use the -replace_database option. Restore to a database file of a different name, connect to it with your favourite query tool and do some selects. If you are happy with it, then:

1. Put the old database offline (disconnect all connections).
2. Rename the old database.
3. Move the restored database to the right location for production.
4. Rename the restored database to the correct name for production.
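As a sketch, again with placeholder paths and credentials, the backup/restore cycle with the page size raised to 8KB would be:

```shell
rem Back up the production database to a backup file
gbak -b -user SYSDBA -password masterkey C:\data\mydb.fdb C:\data\mydb.fbk

rem Restore to a DIFFERENT file name, specifying an 8KB page size
gbak -c -p 8192 -user SYSDBA -password masterkey C:\data\mydb.fbk C:\data\mydb_new.fdb

rem After you have connected to mydb_new.fdb and verified it with selects,
rem and with all users disconnected:
ren C:\data\mydb.fdb mydb_old.fdb
ren C:\data\mydb_new.fdb mydb.fdb
```

Keep mydb_old.fdb around until you are satisfied the restored database is sound.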

>If you see anything else odd in firebird.conf please advise me so I can adjust the parameters.

Really, your firebird.conf has all the defaults and nothing there is extraordinary. While it might be a good idea to increase the page size from 4KB to 8KB, I don't see the page size, nor the configured database cache size, nor the MaxFileSystemCache setting. Yesterday you ignored most of what I spent quite a lot of time writing... and so far you have not told us anything useful about your problem.

Your gstat -h stats told us that the system had recently done more than one garbage collection and that the last effective GC probably cleared out garbage from about 10,000 transactions. That's good. If there were a lot of records represented in that garbage, then your system has probably been trying to clear it out for quite some time, and your application code will continue to accumulate more mismanaged record versions. Keep an eye on that garbage. You can get an estimate of how much garbage is hanging around by calculating the gap between the Oldest Snapshot and the Oldest Transaction.
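The arithmetic is just a subtraction of two counters from the gstat -h header page output. A minimal sketch of reading it programmatically (the sample figures below are invented for illustration; the field labels are the ones gstat prints):

```python
import re

def transaction_gap(gstat_header: str) -> int:
    """Return Oldest Snapshot minus Oldest Transaction from `gstat -h` output."""
    def field(name: str) -> int:
        m = re.search(name + r"\s+(\d+)", gstat_header)
        if m is None:
            raise ValueError("field not found: " + name)
        return int(m.group(1))
    return field("Oldest snapshot") - field("Oldest transaction")

# Invented sample of the relevant gstat -h lines:
sample = """Database header page information:
    Oldest transaction      150000
    Oldest active           159500
    Oldest snapshot         159500
    Next transaction        160123
"""
print(transaction_gap(sample))  # → 9500
```

A gap that keeps growing between checks is the sign that garbage is accumulating faster than it is being cleared.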

If you really don't understand how to do good transaction management in your client code then you *will* need to fall back on housekeeping to deal with the resulting garbage. I would suggest getting all connections off-line once a week and doing a sweep. Then, once a month (at worst), you should do a backup. As long as you are doing the weekly sweep, you can make your backups with the -g switch, which suppresses gbak -b's normal behaviour of performing garbage collection as it reads the records.
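That routine, sketched as commands you could put in scheduled batch files (paths and credentials are placeholders):

```shell
rem Weekly, with all users disconnected:
gfix -sweep -user SYSDBA -password masterkey C:\data\mydb.fdb

rem Monthly (at worst): backup with -g, which skips gbak's own
rem garbage collection because the weekly sweep already handles it
gbak -b -g -user SYSDBA -password masterkey C:\data\mydb.fdb C:\backups\mydb.fbk
```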

I suggest also that you configure GCPolicy to do 'background' garbage collection. Currently you have it at the default 'combined', which means that every select on a table is kicking off co-operative GC on that table as well. If it happens to be a table that gets frequent bulk deletes or updates, co-operative GC could cause quite a slowdown.

Don't forget to --
-- remove the # symbol from the parameter when you edit it
-- save firebird.conf and make sure you saved firebird.conf and not firebird.conf.txt
-- stop and restart the Firebird service after this reconfiguration.
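After those steps, the edited line in firebird.conf would read (note: no leading #):

```
GCPolicy = background
```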