| Subject | Re: Large Db Files, Low Data, Duplicate records from 1 Insert |
|---|---|
| Author | Svein Erling Tysvær |
| Post date | 2005-10-11T08:54:41Z |
Exactly how do you get duplicate records? How do you insert (e.g.
using a TIB_DSQL in IBObjects is very different from using a TTable)
and do you mean 'identical values created by a generator' or something
else when you say 'duplicate record'? Any other things that could be
of importance (triggers, UDFs etc.)? I've never heard of heavy
use causing errors in Firebird (other than bottlenecks, and corrupt
databases when people run with forced writes off and turn off their
server, of course), but I cannot guarantee that this isn't just my
flaky memory.
Set
--- In firebird-support@yahoogroups.com, "Photios Loupis (4T)" wrote:
> We have 2 VERY active databases where records that are added to the
> database have a very short active life-span. As a result the
> architecture we have adopted is that these records stay in the
> active database only for as long as they need to and then they are
> moved to another database for user reporting and mining purposes.
> Periodically we get large spikes of inserts and this causes the
> database file to grow in size, but the actual data that is in the
> database is minimal once the records have been moved out, e.g. a
> 400 MB file is reduced to 13 MB after a backup and restore.
> The issue we are periodically encountering is that when the file
> reaches a large size, it seems we get two identical records
> for a single insert. The longer the database is left, the
> worse this gets and only a backup and restore will resolve this.
> This database is used extensively 24x7 and we sweep the database
> daily to try and curb this issue but recently we have seen it happen
> again and again a backup and restore was the only solution.
> Has anyone encountered anything similar or does anyone have any
> insights into this problem? Alternatively, I am keen to hear about
> backup strategies that will enable us to "cleverly" back up and
> restore databases in a live environment with minimum impact on the
> applications connected.
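On the live-backup question in the quoted post, a common approach with Firebird is a gbak cycle: back up with garbage collection suppressed (so the backup pass doesn't slow the busy live database), restore into a new file, and swap the files in during a quiet window. A minimal sketch, assuming defaults; the paths and credentials below are placeholders, not taken from the post:

```shell
# Hedged sketch of a gbak backup/restore cycle against a live
# Firebird database; paths and credentials are illustrative only.
DB=/data/active.fdb
FBK=/backups/active.fbk

# -b: back up the database; -g: inhibit garbage collection during
# the backup so the pass over a heavily-updated database runs faster
# (the restore rebuilds the file cleanly anyway).
gbak -b -g -user SYSDBA -password masterkey "$DB" "$FBK"

# Restore into a NEW file rather than overwriting the live one, then
# switch clients over while no applications are attached.
gbak -c -user SYSDBA -password masterkey "$FBK" /data/active.new.fdb
```

The restored file is compacted (which is why the 400 MB file in the post shrinks to 13 MB), but the swap itself still needs a moment with no connected clients, so this reduces rather than eliminates the downtime.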