Subject: Re: [IBO] Another Simple question
> Helen, I considered placing the commit outside of the loop, but
> because of my limited understanding of what the server does in the
> background, I thought that the transaction would consume plenty of
> resources if this process was run over a couple of hundred thousand
> records. The way I understand it is that the server will log all
> changes at transaction start, and release these resources when the
> transaction is committed or rolled back.

From my experience the server has no real problem with large volumes
of changes inside a transaction - I've imported over a million records
inside one transaction with no problems at the server.
However, if you get near the end of such a long import and decide to
try a rollback, it can be time to go make a coffee - actually lunch
would be better ;-) (Commit happens very quickly.)
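As a compromise between one huge transaction and a commit per record, you can commit every N rows: the server copes fine with the big transaction, but batches keep a failed import cheap to unwind. A minimal sketch in Python, using sqlite3 purely as a stand-in engine (table and function names are my own, not from IBO):

```python
import sqlite3

def bulk_import(rows, batch_size=10_000):
    """Insert rows, committing every batch_size records.

    One big transaction also works (as noted above), but committing
    in batches means a failure near the end only loses one batch.
    """
    conn = sqlite3.connect(":memory:")  # stand-in for the real database
    conn.execute("CREATE TABLE members (id INTEGER PRIMARY KEY, name TEXT)")
    pending = 0
    for row in rows:
        conn.execute("INSERT INTO members (id, name) VALUES (?, ?)", row)
        pending += 1
        if pending >= batch_size:
            conn.commit()   # release the transaction's resources
            pending = 0
    conn.commit()           # commit the final partial batch
    return conn

conn = bulk_import([(i, f"member{i}") for i in range(25_000)])
count = conn.execute("SELECT COUNT(*) FROM members").fetchone()[0]
```

The batch size is a tuning knob: larger batches mean fewer commits, smaller ones mean less work lost on failure.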
The only time I have seen the server really hit memory problems was
with a complex select statement returning a few hundred thousand
records. The select had several embedded selects and various
computations, and that appeared to cause the server to cache more and
more information as the records were cycled through at the client. I
ended up having to split the select into sections.
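One way to split such a select into sections is to walk the primary key in ranges, so the server only ever materialises one slice at a time. A sketch of that idea, again with sqlite3 as a stand-in and invented table/column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO members VALUES (?, ?)",
                 [(i, f"member{i}") for i in range(1, 1001)])
conn.commit()

def select_in_sections(conn, section_size=250):
    """Fetch a large result set in key-range sections rather than one
    big select, resuming each section after the last key seen."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, name FROM members WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, section_size)).fetchall()
        if not rows:
            break
        yield from rows
        last_id = rows[-1][0]   # next section starts after this key

total = sum(1 for _ in select_in_sections(conn))
```

Any computations the original query embedded would then run per section, which is what keeps the server's cache bounded.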
> It made sense to see the member and all his dependents as a unit of
> work needed to be handled together, not all members collectively.

There are times when it would be nice to have a transaction-within-a-transaction
capability. However, this is not possible in IB, so you
have to set up accordingly.
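For engines that do support SAVEPOINTs (IB did not at the time of this post), each member-plus-dependents unit can be approximated as a sub-transaction inside one outer transaction: roll back to the savepoint on failure, release it on success. A hedged sketch with sqlite3 standing in, with invented names and a deliberately bad second record:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None   # manage the transaction manually
conn.execute("CREATE TABLE members (id INTEGER PRIMARY KEY, dependents INTEGER)")

conn.execute("BEGIN")
for member_id, dependents in [(1, 2), (2, -1), (3, 0)]:
    conn.execute("SAVEPOINT one_member")   # unit of work: one member
    try:
        if dependents < 0:                 # simulate a failing member
            raise ValueError("bad dependent count")
        conn.execute("INSERT INTO members VALUES (?, ?)",
                     (member_id, dependents))
        conn.execute("RELEASE one_member")
    except ValueError:
        conn.execute("ROLLBACK TO one_member")  # undo just this member
        conn.execute("RELEASE one_member")
conn.execute("COMMIT")

kept = [r[0] for r in conn.execute("SELECT id FROM members ORDER BY id")]
```

The failed member (id 2) is undone without disturbing the outer transaction, so members 1 and 3 survive the final commit.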
Geoff Worboys - TeamIBO