Subject Database file disk fragmentation
Author Alexander Tabakov
Hi all,

We've been using Firebird as a DB server for quite a while, and one of
our database files is now well over 5GB. The problem is that the file
has become very heavily fragmented on disk, which in turn leads to
performance issues.

One way of resolving this that I read about somewhere is to create the
database with a single dummy table, populate it heavily until the file
grows to the size you expect to need (for example 20GB), and then drop
the table, leaving the space pre-allocated in one contiguous stretch.

Could someone comment on this technique? For example, should I use
BLOB fields in the table? How can I predict the number of inserts I
will need to populate the table to a certain size? Can I achieve the
same result in another way?
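For the "how many inserts" part, here is the kind of back-of-the-envelope estimate I have in mind (a sketch only; the per-row overhead factor is a guess I made up, not a Firebird-specific figure, so it would need calibrating against a real test run):

```python
def estimate_inserts(target_bytes, avg_row_bytes, overhead_factor=1.2):
    """Rough estimate of the row count needed to grow a database file
    to target_bytes, assuming each stored row costs roughly
    avg_row_bytes * overhead_factor bytes once page and record
    overhead is included (the 1.2 factor is a placeholder guess)."""
    return int(target_bytes // (avg_row_bytes * overhead_factor))

# e.g. growing to ~20GB with rows averaging ~1KB on disk:
rows_needed = estimate_inserts(20 * 1024**3, 1024)
```

In practice one would insert a batch, check the actual file growth, recompute the effective bytes-per-row, and extrapolate from there rather than trust the initial guess.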

Thanks in advance.

P.S. I understand that this question is more appropriate for the
[firebird-enterprise] list, but since there is almost no activity
there I prefer to ask it here :).