| Subject | Re: Performance when deleting a lot of records, again |
|---|---|
| Author | cosamektoo |
| Post date | 2005-04-07T08:14:45Z |
Hi
I've assumed your answer was meant for my question, given the context,
so thanks.
I've added the file gstat.txt to FILES. I see that two tables have
large max versions values: 'DEVICE_FREE_SPACE' at 1455, and
'STATUS_KEEPER' at 103420. (That table contains only one record at all
times, holding an integer value.)
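For what it's worth, my understanding of how a one-row table ends up
with that many back versions (hypothetical DDL below - the real
STATUS_KEEPER definition isn't reproduced here):

    /* Hypothetical stand-in for STATUS_KEEPER: one row, one integer. */
    CREATE TABLE status_keeper (val INTEGER NOT NULL);
    INSERT INTO status_keeper (val) VALUES (0);
    COMMIT;

    /* Every committed UPDATE leaves a back version of the single row.
       If an old snapshot transaction stays open, none of those versions
       can be garbage-collected, so ~100,000 updates leave ~100,000
       back versions behind - which matches what gstat reports. */
    UPDATE status_keeper SET val = val + 1;
    COMMIT;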
I've checked the transactions on those tables and they all commit
immediately.
Any ideas how to proceed?
Thanks
-gili
--- In firebird-support@yahoogroups.com, "Ann W. Harrison"
<aharrison@i...> wrote:
> Christian,
>
> As has already been said, having a database grow from 5Mb to 320Mb
> and shrink back after a backup/restore is an indication that something
> is going wrong. For some reason, garbage collection is not happening.
> Normally the reason is that there is a long-lived snapshot mode
> transaction that could see early versions of records. One way to find
> out is to look at the output of a gstat - I use the command line
> version. It must be run on the server, I think.
>
> Start with gstat -h to get header information.
>
> Check the difference between the next transaction and the oldest
> snapshot - they should be reasonably close. If they're a couple hundred
> thousand transactions apart - or even tens of thousands apart, you need
> to find out where the long-lived transaction is started and stop it or
> change it to a read-only read-committed transaction.
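As a minimal sketch of such a transaction (assuming the long-lived job
can be changed at all; a read-only read-committed transaction is treated
as pre-committed, so it doesn't hold up garbage collection):

    SET TRANSACTION READ ONLY READ COMMITTED;
    /* ... long-running reporting or polling work goes here ... */
    COMMIT;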
>
> For more information, run gstat -r -a >gstat.txt. That will tell you
> how many back versions of records are being retained and how selective
> your indexes are.
>
> If you see that many back versions (max versions:) are being
> retained, then you can be quite sure that the problem is something
> holding a snapshot transaction open for hours.
>
> Indexes with lots of duplicates on a single value (i.e. with a value
> of max dup: greater than 20,000) have a significant impact on the cost
> of garbage collection. Firebird 2 corrects the problem. Until then,
> make the index more selective by making it compound. If, for example,
> you have a database of students and you have a field for grade which
> allows only these values: 'freshman', 'sophomore', 'junior', 'senior',
> rather than putting an index on that field alone, make an index on
> grade, student-id. It's perfectly useful for retrieving grade
> information and doesn't slow down garbage collection.
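As a sketch of the compound index described above, with hypothetical
table and column names:

    /* Hypothetical schema - the point is the key shape, not the names. */
    CREATE TABLE students (
        student_id INTEGER NOT NULL PRIMARY KEY,
        grade      VARCHAR(10) NOT NULL  /* 'freshman' .. 'senior' */
    );

    /* Not this: only four distinct key values means tens of thousands
       of duplicates per value, which is costly to garbage-collect.
       CREATE INDEX idx_grade ON students (grade); */

    /* This instead: every key is unique, and the index still serves
       lookups on grade. */
    CREATE INDEX idx_grade_student ON students (grade, student_id);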
>
> If you are deleting all the records from a table, disable the indexes
> before you start, be sure that all concurrent transactions have stopped
> - yes that's hard - count the table, then reactivate the indexes.
> Dropping the table would be even more effective, though it would cause
> problems with dependencies and probably won't work in a multi-user
> environment.
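Sketched as SQL against the hypothetical students table above
(ALTER INDEX ... INACTIVE/ACTIVE is Firebird's way to disable an
index and then rebuild it):

    ALTER INDEX idx_grade_student INACTIVE;  /* disable before the delete */

    DELETE FROM students;
    COMMIT;

    /* Counting the table visits every deleted record, so the garbage
       collection happens here, without the per-index cost. */
    SELECT COUNT(*) FROM students;
    COMMIT;

    ALTER INDEX idx_grade_student ACTIVE;    /* rebuilds the index */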
>
> Regards,
>
>
> Ann