Subject | Re: [IB-Architect] Fw: Slow deletion on large tables
---|---
Author | Ann Harrison
Post date | 2000-05-28T16:16:43Z
At 01:43 AM 5/27/00 -0400, Claudio Valderrama C. wrote:
> Hello, I need comments on that, assuming it's worth the time, of
> course.
> Is this scenario a natural consequence of the architecture, a bug in the
> engine or a misuse of the developers?
> It appeared in kinobi.performance and I was curious about these
> results.
I do not believe that the problem is with blobs, but rather with the indexes.
Without reproducing the test (which I will do, but not today) I'd guess they
have some indexes with significant numbers of duplicates. The process of
garbage collecting a blob is this: if a record contains a blob id, determine
which type of blob storage is used. If the blob is stored on the record's own
data page, release its line index entry. If the blob is on a page by itself,
mark that page as free in the page inventory. If the blob is on a series of
pages, read the blob index and free all the pages. Note that none of this
requires retrieving the blob pages themselves.
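To make the three cases concrete, here is a minimal C++ sketch of that
dispatch. It is an illustration under assumed names only (BlobLevel,
releaseLineIndexEntry, markPageFree, and readBlobIndexPage are hypothetical
stand-ins, not InterBase internals); what it shows is that every branch
updates bookkeeping structures without ever fetching the blob's data pages.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical storage classes for a blob, decoded from its blob id.
enum class BlobLevel {
    OnPage,     // small blob stored on the record's own data page
    SinglePage, // blob occupies exactly one page of its own
    PageIndex   // blob spans many pages listed on a blob index page
};

struct BlobId {
    BlobLevel level;
    uint32_t  page; // data page, blob page, or blob index page
    uint16_t  line; // line-index slot, used only for on-page blobs
};

// Stand-ins for page-level primitives. Each touches only bookkeeping
// (line index, page inventory, blob index) -- never blob data pages.
void releaseLineIndexEntry(uint32_t /*page*/, uint16_t /*line*/) {}
void markPageFree(uint32_t /*page*/) {}
std::vector<uint32_t> readBlobIndexPage(uint32_t /*page*/) { return {}; }

// Garbage-collect one blob: dispatch on its storage class.
void garbageCollectBlob(const BlobId& id) {
    switch (id.level) {
    case BlobLevel::OnPage:
        releaseLineIndexEntry(id.page, id.line);
        break;
    case BlobLevel::SinglePage:
        markPageFree(id.page);
        break;
    case BlobLevel::PageIndex:
        for (uint32_t p : readBlobIndexPage(id.page))
            markPageFree(p);   // free each data page the index lists
        markPageFree(id.page); // then free the index page itself
        break;
    }
}

int main() {
    garbageCollectBlob({BlobLevel::OnPage, 42, 3});
}
```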
The index duplicate problem is architectural, but solutions exist: removing
an entry from a long chain of duplicates means walking that chain to find the
right record, so deletes get slower as the number of duplicates grows.
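As a rough illustration of why that hurts (my sketch of the commonly
described behavior, not engine code): if each new entry for a key is pushed
onto the front of its duplicate chain, the oldest entry sits at the tail, and
deleting records in insertion order rescans nearly the whole chain every
time.

```cpp
#include <cstdint>
#include <forward_list>
#include <iostream>

int main() {
    const uint32_t kRecords = 20000;   // mirrors the 20,000-row test
    std::forward_list<uint32_t> chain; // one key value shared by all rows
    for (uint32_t rec = 0; rec < kRecords; ++rec)
        chain.push_front(rec);         // newest entry first

    // Delete in insertion order: each victim is at the tail, so every
    // removal scans the whole remaining chain -- about n^2/2 steps.
    uint64_t steps = 0;
    for (uint32_t rec = 0; rec < kRecords; ++rec) {
        auto prev = chain.before_begin();
        for (auto it = chain.begin(); it != chain.end(); prev = it++) {
            ++steps;
            if (*it == rec) { chain.erase_after(prev); break; }
        }
    }
    std::cout << steps << " chain steps for " << kRecords
              << " deletions\n";       // roughly 200 million
}
```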
Ann
Claudio - I can't get to the news group at the moment, could you forward
this there?
>C.
>
>Michael Stratmann <Stratmann@...> wrote in message
><392EC554.7EAEDDBC@...>...
> > Hello!
> >
> > Jon wrote:
> > >
> > > Hello,
> > >
> > > I'm having performance problems deleting records from a large table.
> > > Each record is small (6 integers) and contains a blob which is one 1k
> > > segment.
> > >
> > > In my prototype I add 20,000 records and then delete them again. The
> > > first time through, the commit for the delete finishes in a couple of
> > > seconds. The second time through, the commit takes up to a minute.
> >
> > We (Tilo Arnold and I) have tried it with 200,000 records and 10 kB
> > BLOBs, with the performance hints (queries instead of TTable, ascending
> > and descending indices, SET STATISTICS) already implemented. :-/
> >
> > Delays are going up to 30 minutes, and we would like to store millions of
> > records. Now we are discussing alternatives to BLOBs. Personally, I
> > believe InterBase BLOBs are not a really good idea if you want to
> > delete the BLOBs later.
> >
> > >
> > > I'm using embedded SQL in C++ on an NT machine. The memory usage remains
> >
> > Me too ;-)
> >
> > > pretty steady and the CPU usage remains high.
> >
> > Very similar.
> >
> > >
> > > The database appears to quickly become fragmented. It is unrealistic to
> > > perform a sweep after every transaction. How can I improve the
> > > performance of the deletion of these records?
> > >
> > > Jonathan Kinsey
> >
> > Michael