Subject: Re: Disable the sweeping doesn't help the performance
Author: Adam
Post date: 2006-03-04T03:40:51Z
> I am using Firebird 1.5.3. I have a database that is less than 100
> megabytes. I heard disabling the sweeping may boost the performance.

On the contrary, disabling sweeping may allow garbage to build up to
the point where it chokes. The only thing it will achieve is to
prevent the sweep from occurring when the gap between the oldest and
oldest active transaction reaches 20K. The only time you would really
disable it (set the interval to 0) is if you wanted to run the sweep
manually outside of peak times.
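For example, something like this, with your own database path and
credentials (everything below is a placeholder):

  gfix -h 0 -user SYSDBA -password masterkey C:\data\stock.fdb

and later, to restore the default interval of 20,000 transactions:

  gfix -h 20000 -user SYSDBA -password masterkey C:\data\stock.fdb
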
> Thus, I tried to set the sweep interval to 0 and started using the
> database in the following manner:
>
> When performing a lengthy stock costing calculation on the database,
> the performance is acceptable the first time. But when I perform the
> same calculation a second time, the performance degrades
> dramatically. So do the 3rd and 4th times:
>
> First time: 368 seconds
> Second time: 989 seconds
> Third time: 1013 seconds
> Fourth time: 1060 seconds
>
> The following is the final gstat statistics. The "Oldest Transaction"
> was below 1000 on the first round and increased to 4705 on the last
> round.
>
> Oldest transaction 4704
> Oldest active 4705

These are the stats of interest for garbage collection.

> The costing calculation performs "start transaction" and "Commit
> Transaction" for each stock item calculated. There are around 300
> stock items. Each transaction involves an average of 1000 database
> update operations (DELETE/INSERT).

Delete doesn't remove the record. It flags it as 'deleted', but the
record isn't actually removed from the table or from any index until
EVERY transaction that could possibly be interested in it is committed
(or rolled back). The transaction that reads the record and notices
that the record is no longer interesting (ie garbage) is given the
task of cleaning it up (removing the record and updating the relevant
indices). Due to the way Firebird 1.5 handles duplicates in indices, an
index which contains a lot of duplicates is 'expensive' to maintain.
Firebird 2 (beta) greatly improves this but it is not yet suitable for
production use. You could certainly use it in your test environment to
prove this is the problem.
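
If you want to watch the garbage build up between runs, gstat can
report the record versions per table. A sketch, run on the database
server (the path is a placeholder):

  gstat -r C:\data\stock.fdb

Compare the "total versions:" and "max versions:" figures for your
stock tables before and after a costing run.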
There are a number of things you can do in this situation:
1) After you commit the transaction, start another one that does a
select count(*) from that table. This transaction will incur the
expense of garbage collection (see the sketch after this list).
2) If you are deleting the entire table, you could instead drop and
recreate it. This is also a lot quicker.
3) You can change the index to make it more selective by adding the
primary key field to the end of it (also sketched below). This is just
as helpful to Firebird, but the garbage collection is much faster. If
it is a foreign key, then this approach is not really possible.
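
A minimal sketch of 1) and 3). The table STOCK_COST, its primary key
STOCK_ID, the index IX_STOCK_ITEM and the column ITEM_CODE are
hypothetical names; substitute your own:

  /* 1) In a fresh transaction straight after the commit, force a full
     scan so this transaction pays the garbage collection cost. */
  SELECT COUNT(*) FROM STOCK_COST;

  /* 3) Rebuild a low-selectivity index with the primary key appended,
     so every key is close to unique and cheap to garbage collect. */
  DROP INDEX IX_STOCK_ITEM;
  CREATE INDEX IX_STOCK_ITEM ON STOCK_COST (ITEM_CODE, STOCK_ID);
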
> My question is: the Next transaction seems not too far from the OAT
> (below 20000). Why does the performance degrade so much on the second
> round? If I back up and restore the database, the performance goes
> back to normal. Could anyone advise on how to properly use manual
> sweeping?

Backup does not make a backup copy of the deleted records, so of
course it will be fast after a restore. If you do not use the -g
parameter of gbak, then the backup process must clean up the database
anyway and this will take just as long.
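
For example (paths and credentials are placeholders), a backup that
skips the cleanup with -g, followed by a restore to a fresh file:

  gbak -b -g -user SYSDBA -password masterkey C:\data\stock.fdb C:\backups\stock.fbk
  gbak -c -user SYSDBA -password masterkey C:\backups\stock.fbk C:\data\stock_new.fdb

Note that a backup taken with -g still contains no deleted records;
-g only skips cleaning up the source database as it is read.
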
You can sweep at any time using gfix -sweep.
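
For example, with a placeholder path and credentials:

  gfix -sweep -user SYSDBA -password masterkey C:\data\stock.fdb
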
Adam