Subject | Re: [firebird-support] Limitting maximum database file size
---|---
Author | Helen Borrie
Post date | 2003-06-23T12:57:30Z
At 07:04 PM 23/06/2003 +0700, you wrote:
> > if you never call a hard Commit then there is absolutely NO
> > garbage collection happening.
> > ... deleted ...
>
>Yup!! That's the answer: a hard COMMIT, without RETAIN. After I commented
>out the COMMIT RETAIN and placed a COMMIT after the innermost loop, the
>problem went away. I got a stable database file size, though the
>application runs slower (it took about 7 minutes to finish the loop). As
>long as it runs stably and consistently, that's OK for me. :)
>
>Thank you, Helen. :-) Your explanation helped me a lot. And thanks also
>to the other people here for the responses.
Right button for the cause of the bloat, wrong button for fixing it. Your
solution is known as "throwing the baby out with the bathwater".
The process flow is well hidden by your pseudocode, but your hard commit
should NOT occur for every row. One of your loops (and only you can know
which one) should be counting rows updated and, when this count gets to
10,000, you call commit. The next round of garbage collection will clear
out 10,000 obsolete rows, thus making space for another 10,000. The GC
thread just hangs around WAITING to get its massive jaws around 10,000
chunks of junk.
heLen
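
For reference, a minimal sketch of the batched-commit loop described above, assuming the Python `fdb` Firebird driver (any DB-API 2.0 driver exposes the same connect/cursor/commit calls); the table name, columns, and data are hypothetical placeholders:

```python
# A sketch of the batched-commit pattern from this thread, assuming the
# Python fdb Firebird driver (any DB-API 2.0 driver looks the same).
# Table name, columns, and workload are hypothetical placeholders.
import fdb

BATCH_SIZE = 10_000  # hard commit every 10,000 updates

# Stand-in workload: (id, new_price) pairs to apply.
changes = [(i, i * 1.1) for i in range(1, 100_001)]

con = fdb.connect(dsn="localhost:/data/mydb.fdb",
                  user="SYSDBA", password="masterkey")
cur = con.cursor()
pending = 0
for item_id, new_price in changes:
    cur.execute("UPDATE stock SET price = ? WHERE id = ?",
                (new_price, item_id))
    pending += 1
    if pending == BATCH_SIZE:
        con.commit()        # hard COMMIT: ends the transaction so the next
        cur = con.cursor()  # GC pass can reclaim the obsolete row versions;
        pending = 0         # re-open the cursor, its transaction has ended
if pending:
    con.commit()            # flush the final partial batch
con.close()
```

The point is the plain COMMIT rather than COMMIT RETAIN: a retained commit keeps the transaction context open, so the garbage collector never gets to reclaim the superseded record versions and the file keeps growing.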