Subject Re: Performance when deleting a lot of records, again
Author cosamektoo
Hi
I've assumed your answer is for my question because of the context,
so thanks.
I've added the file gstat.txt to FILES. I see that two tables have large
max versions values:
'DEVICE_FREE_SPACE' - 1455, and 'STATUS_KEEPER' - 103420. (The latter
table contains only one record at any time, holding an integer value.)
I've checked the transactions on those tables and they all commit
immediately.
Any ideas on how to proceed?
Thanks
-gili


--- In firebird-support@yahoogroups.com, "Ann W. Harrison"
<aharrison@i...> wrote:
> Christian,
>
> As has already been said, having a database grow from 5Mb to 320Mb
> and shrink back after a backup/restore is an indication that something
> is going wrong. For some reason, garbage collection is not happening.
> Normally the reason is that there is a long-lived snapshot mode
> transaction that could see early versions of records. One way to find
> out is to look at the output of gstat - I use the command line
> version. It must be run on the server, I think.
>
> Start with gstat -h to get header information.
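A minimal sketch of that first step, assuming gstat is on the PATH of the
server machine and the database file lives at /data/mydb.fdb (both paths
are hypothetical):

```shell
# Print only the header page of the database.
# gstat reads the file directly, so run this on the server host.
gstat -h /data/mydb.fdb
```

The lines to look for in the output are "Oldest snapshot" and
"Next transaction" (exact labels vary a little between versions).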
>
> Check the difference between the next transaction and the oldest
> snapshot - they should be reasonably close. If they're a couple hundred
> thousand transactions apart - or even tens of thousands apart, you need
> to find out where the long-lived transaction is started and stop it or
> change it to a read-only read-committed transaction.
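A sketch of that gap check. The counter values below are made up for
illustration; on a real system you would feed in the actual output of
`gstat -h mydb.fdb` instead of the heredoc:

```shell
# Hypothetical sample of the counter lines from `gstat -h` output.
cat > header.txt <<'EOF'
        Oldest transaction      11234
        Oldest active           11240
        Oldest snapshot         11240
        Next transaction        245881
EOF

# Extract the two counters and compute the gap between them.
next=$(awk '/Next transaction/ {print $3}' header.txt)
snap=$(awk '/Oldest snapshot/ {print $3}' header.txt)
echo "gap: $((next - snap))"
```

A gap this large (234641) would point at a stuck transaction. In Firebird
SQL, the harmless kind of long-running transaction Ann describes can be
started with `SET TRANSACTION READ ONLY ISOLATION LEVEL READ COMMITTED;`.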
>
> For more information, run gstat -r -a >gstat.txt. That will tell you
> how many back versions of records are being retained and how selective
> your indexes are.
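A sketch of scanning that output for the "max versions" figure. The
table block below is an invented fragment in the general shape of
`gstat -r` output (the exact layout varies by version); on a real system
you would use the gstat.txt produced by `gstat -r -a /data/mydb.fdb`:

```shell
# Hypothetical fragment of `gstat -r -a` output.
cat > gstat.txt <<'EOF'
STATUS_KEEPER (142)
    Average record length: 9.00, total records: 1
    Average version length: 9.00, total versions: 103420, max versions: 103420
EOF

# Pull out the max-versions figure from each per-table record block.
maxv=$(awk -F'max versions: ' '/max versions:/ {print $2}' gstat.txt)
echo "max versions: $maxv"
```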
>
> If you see that many back versions (max versions:) are being
> retained, then you can be quite sure that the problem is holding a
> snapshot transaction open for hours.
>
> Indexes with lots of duplicates on a single value (i.e. with a value
> of max dup: greater than 20,000) have a significant impact on the cost
> of garbage collection. Firebird 2 corrects the problem. Until then,
> make the index more selective by making it compound. If, for example,
> you have a database of students and you have a field for grade which
> allows only these values: 'freshman', 'sophomore', 'junior', 'senior',
> rather than putting an index on that field alone, make an index on
> grade, student-id. It's perfectly useful for retrieving grade
> information and doesn't slow down garbage collection.
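A sketch of that compound index in isql. The table and column names
(STUDENT, GRADE, STUDENT_ID), database path, and credentials are all
hypothetical:

```shell
# Create the compound index from inside isql on the server.
isql -user SYSDBA -password masterkey /data/mydb.fdb <<'EOF'
-- Instead of the low-selectivity: CREATE INDEX IDX_GRADE ON STUDENT (GRADE);
CREATE INDEX IDX_GRADE_ID ON STUDENT (GRADE, STUDENT_ID);
COMMIT;
EOF
```

Because STUDENT_ID is (presumably) unique, every key in the compound
index is distinct, which avoids the long duplicate chains that make
garbage collection expensive.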
>
> If you are deleting all the records from a table, disable the indexes
> before you start, be sure that all concurrent transactions have stopped
> - yes that's hard - count the table, then reactivate the indexes.
> Dropping the table would be even more effective, though it would cause
> problems with dependencies and probably won't work in a multi-user
> environment.
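The delete-everything procedure above can be sketched in isql roughly as
follows, for a hypothetical table LOG_DATA with one non-constraint index
IDX_LOG_TS (indexes backing primary/unique/foreign keys cannot be
deactivated this way):

```shell
isql -user SYSDBA -password masterkey /data/mydb.fdb <<'EOF'
ALTER INDEX IDX_LOG_TS INACTIVE;  -- disable the index first
COMMIT;
DELETE FROM LOG_DATA;
COMMIT;
SELECT COUNT(*) FROM LOG_DATA;    -- full scan: this connection visits
                                  -- every record and collects the garbage
ALTER INDEX IDX_LOG_TS ACTIVE;    -- rebuild the index afterwards
COMMIT;
EOF
```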
>
> Regards,
>
>
> Ann