Subject: Re: Large Db Files, Low Data, Duplicate records from 1 Insert
Author: Photios Loupis (4T)
We are using FIBPlus and each insert is wrapped in its own
transaction. There is a Primary Key on the table and attached to it
is a generator. It is not a duplication in the true sense, as the
Primary Key has a unique value; it almost appears that the insert
action happens twice.
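A query along these lines will show the pairs I am describing; the
table and column names below are only placeholders, not our real
schema:

  SELECT COL_A, COL_B, COL_C, COUNT(*) AS COPIES
  FROM ACTIVE_TABLE
  GROUP BY COL_A, COL_B, COL_C
  HAVING COUNT(*) > 1;

Each row returned is one logical insert that has ended up stored
twice, with each copy carrying its own generated Primary Key value.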
We are not using UDFs on the database and there are no INSERT
triggers on this table.
The database is running on a Windows server using SuperServer with
forced writes on.
The system runs fine even when we get large spikes of inserts. This
only seems to happen after a while, when there is a large amount
of "white space". I know this because when it starts happening and I
perform a restore from a recent backup, the problem goes away.
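One way to get a feel for how much white space has built up, without
taking the database off-line, is gstat's data page analysis, along
these lines (the path is just an example and the exact switches may
differ between Firebird versions):

  gstat -d C:\databases\active.fdb

The average fill figures it reports per table give a rough idea of
how empty the file has become.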
*confused*
I realise that this is a strange occurrence and we have debugged this
database numerous times without being able to replicate the problem. It
seems the best approach is to define a backup/restore strategy that
can be executed regularly. We will need to get our applications to
behave appropriately when a database is taken off-line by the
backup/restore task.
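Something along these lines is what I have in mind: a standard gbak
backup, a restore into a fresh file, and then a file swap (paths and
credentials below are examples only):

  rem Back up the live database to a backup file
  gbak -b -user SYSDBA -password ******** C:\data\active.fdb C:\data\active.fbk

  rem Restore into a NEW file rather than over the live database
  gbak -c -user SYSDBA -password ******** C:\data\active.fbk C:\data\active_new.fdb

  rem With all applications disconnected, swap the files over
  ren C:\data\active.fdb active_old.fdb
  ren C:\data\active_new.fdb active.fdb

The applications would only need to be off-line for the rename step
at the end, which is why they have to reconnect gracefully afterwards.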

--- In firebird-support@yahoogroups.com, Svein Erling Tysvær
<svein.erling.tysvaer@k...> wrote:
>
> Exactly how do you get duplicate records? How do you insert (e.g.
> using a TIB_DSQL in IBObjects is very different from using a TTable)
> and do you mean 'identical values created by a generator' or
> something else when you say 'duplicate record'? Any other things
> that could be of importance (triggers, UDFs etc.)? I've never heard
> about excessive use causing errors in Firebird (other than
> bottlenecks and corrupt databases when people run with forced writes
> off and turn off their server, of course), but I cannot guarantee
> this not to be because of my flaky memory.
>
> Set
>
> --- In firebird-support@yahoogroups.com, "Photios Loupis (4T)"
> wrote:
> > We have 2 VERY active databases where records that are added to
> > the database have a very short active life-span. As a result the
> > architecture we have adopted is that these records stay in the
> > active database only for as long as they need to and then they
> > are moved to another database for user reporting and mining
> > purposes.
> > Periodically we get large spikes of inserts and this causes the
> > database file to grow in size, but the actual data that is in the
> > database is minimal once the records have been moved out, e.g. a
> > 400mb file is reduced to 13mb after a backup and restore.
> > The issue we are periodically encountering is that when the file
> > reaches a large size it seems that periodically we get 2 identical
> > records for a single insert. The longer the database is left the
> > worse this gets and only a backup and restore will resolve this.
> > This database is used extensively 24x7 and we sweep the database
> > daily to try and curb this issue, but recently we have seen it
> > happen again and, again, a backup and restore was the only
> > solution.
> > Has anyone encountered anything similar or does anyone have any
> > insights into this problem? Alternatively, I am keen to hear about
> > backup strategies that will enable us to "cleverly" backup and
> > restore databases in a live environment with minimum impact on
> > the applications connected.
>