Subject Re: [IB-Architect] Insert Speed
Author Jim Starkey
At 11:22 AM 4/8/00 +1000, Jan Mikkelsen wrote:
>
>
>I assume the PIP is a bitmap of free vs. used pages. I also assume (from
>your statement above) that finding a free page is a sequential search of
>this page, rather than loading the structure into memory and maintaining it
>using something like the buddy system (or whatever). Is that correct?
>

A PIP (page inventory page) is indeed a bitmap of free vs. used
pages. Each PIP keeps a low water mark to avoid repetitive
searching. It could be made smarter at the cost of density, but I
don't think there is a bottleneck here.
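
In outline it amounts to the sketch below; the names and the
assumption of a 4 KB page used entirely as bitmap are illustrative,
not the engine's actual declarations:

#include <stdint.h>

#define PIP_BITS  (8 * 4096)           /* assume a 4 KB page used entirely as bitmap */

typedef struct pip {
    uint32_t low_water;                /* first slot worth searching */
    uint8_t  bits[PIP_BITS / 8];       /* 1 = allocated, 0 = free */
} pip;

/* Scan forward from the low water mark for a free page; returns the
   slot number within this PIP, or -1 if every page is taken. */
static int32_t pip_find_free(pip *p)
{
    for (uint32_t slot = p->low_water; slot < PIP_BITS; slot++) {
        if (!(p->bits[slot / 8] & (1u << (slot % 8)))) {
            p->bits[slot / 8] |= (uint8_t) (1u << (slot % 8));
            p->low_water = slot + 1;   /* never rescan below this point */
            return (int32_t) slot;
        }
    }
    return -1;
}

/* Freeing a page below the mark pulls the mark back down. */
static void pip_release(pip *p, uint32_t slot)
{
    p->bits[slot / 8] &= (uint8_t) ~(1u << (slot % 8));
    if (slot < p->low_water)
        p->low_water = slot;
}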

>How are ACID properties maintained for page allocation within the database
>file? Is there a mini log of some sort for maintaining the integrity of the
>database file? If an in-core copy of the structure is used, of course it
>will need to be committed to disk at appropriate points. The reason for
>this question is that (I assume) when using page by page allocation, at
>least one write is necessary for each successive page allocation to ensure
>the consistency of the underlying database file. Allocation by extent would
>(I expect) reduce the number of writes necessary.
>

Could you explain ACID?

The cache maintains precedence relationships to ensure that the PIP
is written before any of the pages allocated from it, but it is
far from a one-to-one relationship.
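
The mechanics are roughly the sketch below; the real cache keeps a
graph of such dependencies rather than a single pointer, and the
names here are made up for illustration:

#include <stdbool.h>
#include <stdio.h>

typedef struct buffer {
    int            page_number;
    bool           dirty;
    struct buffer *must_write_first;   /* precedence: write that buffer before this one */
} buffer;

/* Stand-in for the real page writer. */
static void write_page(buffer *b)
{
    printf("writing page %d\n", b->page_number);
}

/* Record that 'pip' has to reach disk before 'page' does. */
static void set_precedence(buffer *page, buffer *pip)
{
    page->must_write_first = pip;
}

/* Flushing a buffer first flushes whatever it depends on, so a newly
   allocated page can never hit the disk ahead of the PIP that marks
   it as in use. */
static void flush_buffer(buffer *b)
{
    if (b->must_write_first && b->must_write_first->dirty)
        flush_buffer(b->must_write_first);
    if (b->dirty) {
        write_page(b);
        b->dirty = false;
    }
}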

I did a test about a lifetime ago of the performance gain
from ignoring careful write altogether. The difference was on
the order of a couple of percentage points -- not enough to
get excited about.

>I don't have a very clear view of the semantics or structure of the database
>file.
>

That's a big question, sir. The general page types are:

Header page (one per database)
Page inventory page
Transaction inventory page
Pointer page (points to the data pages in a table)
Data page
Index root page (one per table)
Index page
Generator page

Pick your favorite and ask.
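
If it helps, the list maps onto a page-type tag carried in each
page's header; the identifiers and values below are illustrative,
not the on-disk structure definitions:

/* Page types from the list above; names and numbering are illustrative. */
typedef enum page_type {
    PT_HEADER,          /* header page, one per database */
    PT_PAGE_INVENTORY,  /* PIP: bitmap of free vs. used pages */
    PT_TX_INVENTORY,    /* transaction inventory page */
    PT_POINTER,         /* points to the data pages of a table */
    PT_DATA,            /* record data */
    PT_INDEX_ROOT,      /* one per table */
    PT_INDEX,           /* index page */
    PT_GENERATOR        /* generator values */
} page_type;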

>
>On the allocation of pages from the OS to the database file, requesting
>pages in larger chunks from the operating system can be faster, as well as
>helping the underlying filesystem reduce fragmentation. On the level of the
>database file, I don't think allocating in (say) 64KB or even 256KB chunks
>is a big issue.
>

On VMS (ugh, oh double ugh) we do a pretty much standard extension
algorithm (double the size up to a limit). I didn't know that
page allocation was a Unix concept. My understanding was that
Unix allocates pages on demand, and that seeking to an arbitrary
page and writing would allocate that page, leaving holes in the
unreferenced space. Never looked, though.
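
For what it's worth, that behavior is easy to see with plain POSIX
calls; on a filesystem that supports sparse files, this little
program writes one page far past end of file and the block count
shows only that page was actually allocated:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("sparse.dat", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    char page[4096] = { 0 };
    /* Seek well past end of file and write a single page there. */
    if (lseek(fd, 1024L * 4096, SEEK_SET) < 0 ||
        write(fd, page, sizeof page) < 0) {
        close(fd);
        return 1;
    }

    struct stat st;
    fstat(fd, &st);
    printf("logical size %lld, 512-byte blocks allocated %lld\n",
           (long long) st.st_size, (long long) st.st_blocks);
    close(fd);
    return 0;
}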

>On the allocation of pages to tables (the subject of my question), I don't
>know whether or not there is a win in larger allocations, although I expect
>there is. Is fragmentation within large databases a big issue?
>
>David Schnepper has posted that he recalls the bottleneck was searching for
>pages within the table with enough free space to take the new record. What
>does the structure for recording usage within a table look like? How
>expensive is the maintenance?
>

Pointer pages are allocated to tables as needed (and released only
when the table is deleted). Data pages have the classical database
structure: a line index at one end holding the offset on page and
length of each record segment. The engine remembers where it
stored the last record and starts looking for space there. The
pointer page also keeps track of whether a data page is known to be
full. Trying to keep more accurate statistics would be extremely
expensive in Classic. I'd like to see Classic retire to a
tropical island somewhere, but Charlie keeps coming up with
reasons to keep it.
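
The data page layout is the textbook slotted-page arrangement,
something like the sketch below; the structure names and exact
layout are illustrative rather than taken from the engine's headers:

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096

typedef struct line {          /* one entry per record segment on the page */
    uint16_t offset;           /* where the segment starts on the page */
    uint16_t length;           /* how long it is; 0 means a deleted slot */
} line;

typedef struct data_page {
    uint16_t count;            /* line index entries in use */
    line     index[];          /* index grows up from this end; record
                                  segments are stored down from the other,
                                  so free space is the gap in between */
} data_page;

/* Free space is whatever lies between the end of the line index and the
   lowest record segment already stored on the page. */
static size_t free_space(const data_page *dp)
{
    size_t low = PAGE_SIZE;
    for (uint16_t i = 0; i < dp->count; i++)
        if (dp->index[i].length && dp->index[i].offset < low)
            low = dp->index[i].offset;

    size_t index_end = offsetof(data_page, index) + dp->count * sizeof(line);
    return low > index_end ? low - index_end : 0;
}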

Jim Starkey