Subject Re: [firebird-support] Limiting maximum database file size
Author Svein Erling Tysvaer
At 11:11 24.06.2003 +0700, you wrote:
> > Right button for the cause of the bloat, wrong button for
> > fixing it. Your solution is known as "throwing the baby
> > out with the bathwater".
>That's a new term for me. Can you explain it?

1. You have a dirty baby.
2. You fill a tub with water.
3. You put the dirty baby in the tub.
4. You wash your baby.
5. You throw away the muddy water.

Now, you've done a great job as a babysitter. But wait, where did the baby
go? Oops, maybe you made a serious mistake... Well, that's at least what a
Norwegian thinks the phrase means.

> > The process flow is well hidden by your pseudocode, but
> > your hard commit should NOT occur for every row. One of
> > your loops (and only you can know which one) should be
> > counting rows updated and, when this count gets to
> > 10,000, you call commit.
>Yup. I've optimized my code to do that. I think, besides the update
>process, the thing that slows my application down is the commit itself,
>because after a commit the connection is broken and I have to
>re-connect to the server. I hope I can optimize my code so it can finish
>its task in under 5 minutes.

Yes, commits are time consuming, but they do not break connections. What
does happen is that the transaction ends, so depending on which tool you
use (IBObjects does it automatically in most cases), you may have to start
a new transaction (or 'restart your transaction'). If you have sensible
(read: selective) indexes, don't do anything fancy, and run the update
directly on your server, you should be able to update 60,000 records in
less than a minute (well, I haven't tried updates, but I have written a
program transferring three or four thousand records per second between two
databases, and I cannot imagine updates being that much slower).