Subject | DB-file size exploding on working with blobs |
---|---|
Author | superkatchina |
Post date | 2005-01-17T13:26:29Z |
Hello,
now I have another problem. When I store data into a blob field and
then update it with new data again and again, the file size of the
database suddenly explodes.
The amount of data imported was never that big, and at the time of the
explosion it was very small. (But I had restarted the database.) It
seems that the DB has a cache which was written back to disk.
Now I have a file size of more than 60 MB. What can I do to get a
smaller file, or to avoid the explosion?
Calling gfix -sweep did not help.
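For reference, the manual sweep was invoked roughly like this (the file name mydb.fdb and the SYSDBA/masterkey credentials are placeholders, not the real ones):

```
# Run a manual sweep to remove garbage record versions
gfix -sweep -user SYSDBA -password masterkey mydb.fdb
```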
Making a backup and restoring it produces a small, optimized DB file again.
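A backup/restore cycle with gbak would look roughly like this (again, file names and credentials are placeholders):

```
# Back up the database to a portable backup file
gbak -b -v -user SYSDBA -password masterkey mydb.fdb mydb.fbk

# Restore it into a fresh, compacted database file
gbak -c -v -user SYSDBA -password masterkey mydb.fbk mydb_restored.fdb
```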
The problem is that I will have very many blob records with a lot of
updates on them, and I am afraid the DB file will become too large.
Thank you,
Werner Hofmann