| Subject | Re: [firebird-support] Re: Memory use and FB 1.5 |
|---|---|
| Author | Helen Borrie |
| Post date | 2004-06-30T11:02:10Z |
At 10:42 AM 30/06/2004 +0000, you wrote:
> > We'll need a bit more information. Can you give us the statistics

Inserts don't generate garbage.
>Daniel Rail wrote:
> >
> > My suspicion is that it might be related to garbage collection,
> > because of the high number of records that you inserted. Did you do
> > the insert within one single transaction or did you commit after a
> > certain amount of records?
>I did a commit after each 50 inserts.

That's too frequent. Next time you run it, think in terms of batches of
around 8,000 to 10,000 per commit - that is, COMMIT statements, not COMMIT RETAINING.
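The batching pattern above can be sketched in Python. This is a minimal sketch only: sqlite3 stands in for a Firebird connection (with FB you would connect through a Firebird driver instead), and the table and column names are made up for illustration. The shape of the loop - insert, count, hard commit every N rows - is the point.

```python
import sqlite3

BATCH_SIZE = 10_000  # commit every 8-10,000 rows, as suggested above

# sqlite3 used as a stand-in; hypothetical table for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER, payload TEXT)")

rows = ((i, f"row-{i}") for i in range(25_000))
pending = 0
for row in rows:
    conn.execute("INSERT INTO big_table VALUES (?, ?)", row)
    pending += 1
    if pending == BATCH_SIZE:
        conn.commit()          # a hard COMMIT, not COMMIT RETAINING
        pending = 0
conn.commit()                  # commit the final partial batch

count = conn.execute("SELECT COUNT(*) FROM big_table").fetchone()[0]
print(count)  # prints 25000
```

Committing every 50 rows pays the per-transaction overhead five hundred times for the same work; batches in the thousands amortise it.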
It would be worth gstat-ting the indexes on that table. If the large
insert affected the geometry of the indexes in a significantly unhelpful
way, that could account for slow searches for the next users.
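A quick sketch of that check from the command line (the database path is hypothetical; `-index` limits the report to index statistics):

```
gstat -index /data/mydb.fdb
```

In the output, look at the depth, nodes and fill-distribution figures for the table's indexes; a depth greater than 3, or a fill distribution skewed toward mostly-empty pages after a mass insert, is the kind of "unhelpful geometry" meant above.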
If you're alone on the system when doing large batches of inserts, try
deactivating the indexes for the duration of the task and reactivating them
afterwards. The batch will run faster, and reactivation will rebuild the
indexes from scratch. After that, run SET STATS from isql, and log
off.
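The deactivate/reactivate cycle can be sketched in SQL like this (the index name is made up; note that indexes that enforce constraints, such as primary keys, cannot be deactivated this way):

```sql
ALTER INDEX IDX_BIGTABLE_FIELD1 INACTIVE;
COMMIT;

-- ... run the batched inserts here, committing every 8-10,000 rows ...

ALTER INDEX IDX_BIGTABLE_FIELD1 ACTIVE;  -- reactivation rebuilds the index
COMMIT;
```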
/heLen