Subject | Re: [IB-Architect] Insert Speed
---|---
Author | Jan Mikkelsen
Post date | 2000-04-08T01:22:12Z

Jim Starkey <jas@...> wrote:
>At 08:48 AM 4/8/00 +1000, Jan Mikkelsen wrote:
>>Has anything other than page at a time allocation ever been considered?
>
>Could you explain the thinking behind the question? It sounds
>like an argument for a larger (default) page size. 4K, maybe 8k,
>would make a great deal more sense than 1k.

I wasn't arguing for a larger default page size, although I do think that is
a good idea.

The thinking behind my question was trying to find out whether page
allocation was a bottleneck. I guess I'm also poking around for pieces of
#ifdef'd out code.

>Successive page allocation isn't particularly expensive. The
>page inventory page is essentially guaranteed to be sitting in
>memory and unless the PIP is chaffed, pages will come out
>sequentially (wish I could say that about non-extent based
>file systems).

I assume the PIP is a bitmap of free vs. used pages. I also assume (from
your statement above) that finding a free page is a sequential search of this
page, rather than loading the structure into memory and maintaining it using
something like the buddy system (or whatever). Is that correct?
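
For concreteness, here's a minimal sketch of the kind of sequential bitmap
scan I have in mind; the type names, the 4K page size and the bit convention
are my assumptions, not anything taken from the actual source:

```c
#include <stdint.h>

#define PAGE_SIZE 4096                 /* assumed page size */
#define BITS_PER_PIP (PAGE_SIZE * 8)   /* pages tracked by one PIP */

/* Hypothetical in-memory image of a page inventory page:
   one bit per page, 0 = free, 1 = allocated. */
typedef struct {
    uint8_t bits[PAGE_SIZE];
} pip_t;

/* Sequentially scan the PIP for the first free page; returns the page
   number relative to the start of this PIP, or -1 if the PIP is full. */
static long pip_find_free(const pip_t *pip)
{
    for (long i = 0; i < BITS_PER_PIP / 8; i++) {
        if (pip->bits[i] != 0xFF) {           /* at least one free bit here */
            for (int b = 0; b < 8; b++)
                if (!(pip->bits[i] & (1u << b)))
                    return i * 8 + b;
        }
    }
    return -1;
}

/* Mark a page as allocated in the PIP image. */
static void pip_mark_used(pip_t *pip, long page)
{
    pip->bits[page / 8] |= (uint8_t)(1u << (page % 8));
}
```
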
How are ACID properties maintained for page allocation within the database
file? Is there a mini log of some sort for maintaining the integrity of the
database file? If an in-core copy of the structure is used, of course it
will need to be committed to disk at appropriate points. The reason for
this question is that (I assume) when using page by page allocation, at
least one write is necessary for each successive page allocation to ensure
the consistency of the underlying database file. Allocation by extent would
(I expect) reduce the number of writes necessary.
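
To make the write-count argument concrete, here is a toy model (entirely my
own, not engine code) contrasting the two policies: page-at-a-time allocation
flushes the PIP once per page, while a hypothetical 16-page extent lets many
allocations share one flush:

```c
#include <stdio.h>

#define EXTENT_PAGES 16   /* assumed extent size: 16 pages */

/* Stand-in for writing the page inventory page back to disk. */
static unsigned long pip_writes;
static void flush_pip(void) { pip_writes++; }

/* Page-at-a-time: each allocation updates the PIP and flushes it so the
   on-disk structure stays consistent on its own. */
static void allocate_page_by_page(int npages)
{
    for (int i = 0; i < npages; i++) {
        /* ...set one bit in the PIP image... */
        flush_pip();
    }
}

/* Extent-based: mark a whole run of pages at once and flush once per run. */
static void allocate_by_extent(int npages)
{
    for (int done = 0; done < npages; done += EXTENT_PAGES) {
        /* ...set EXTENT_PAGES bits in the PIP image... */
        flush_pip();
    }
}

int main(void)
{
    pip_writes = 0; allocate_page_by_page(1024);
    printf("page-at-a-time: %lu PIP writes\n", pip_writes);  /* 1024 */

    pip_writes = 0; allocate_by_extent(1024);
    printf("extent-based:   %lu PIP writes\n", pip_writes);  /* 64 */
    return 0;
}
```

For 1024 new pages that's 1024 PIP writes against 64, assuming nothing else
batches the updates.
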
I don't have a very clear view of the semantics or structure of the database
file.

>But to answer your question: Not to my knowledge. Shall we?

If there's a payoff, then yes. While I was asking about allocation of pages
to tables, there is also the issue of the allocation of pages from the
operating system to the database.

On the allocation of pages from the OS to the database file, requesting
pages in larger chunks from the operating system can be faster, as well as
helping the underlying filesystem reduce fragmentation. On the level of the
database file, I don't think allocating in (say) 64KB or even 256KB chunks
is a big issue.
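
As an illustration of what I mean by larger chunks, something along these
lines (plain POSIX, nothing from the engine; the 256KB increment is just an
example) grows the file in bigger steps so the filesystem can hand back
mostly contiguous space:

```c
#define _XOPEN_SOURCE 600   /* for posix_fallocate */
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

#define GROW_CHUNK (256 * 1024)   /* assumed growth increment: 256KB */

/* Extend the database file to at least 'needed' bytes, rounding the request
   up to the next 256KB boundary so the filesystem sees fewer, larger
   allocations and can keep the file mostly contiguous. */
static int grow_database_file(int fd, off_t needed)
{
    struct stat st;
    if (fstat(fd, &st) < 0)
        return -1;
    if (st.st_size >= needed)
        return 0;                         /* already big enough */

    off_t target = ((needed + GROW_CHUNK - 1) / GROW_CHUNK) * GROW_CHUNK;
    /* posix_fallocate actually reserves the blocks; ftruncate alone would
       only create a hole. Returns 0 on success, an errno value on failure. */
    return posix_fallocate(fd, 0, target);
}
```
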
On the allocation of pages to tables (the subject of my question), I don't
know whether or not there is a win in larger allocations, although I expect
there is. Is fragmentation within large databases a big issue?

David Schnepper has posted that he recalls the bottleneck was searching for
pages within the table with enough free space to take the new record. What
does the structure for recording usage within a table look like? How
expensive is the maintenance?
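
I don't know what the real structure looks like, but the naive version I'm
picturing is a per-table map of approximate free-space counts, scanned
linearly for the first page that can take the record; if that's roughly it,
both the scan and keeping the counts current on every insert, update and
delete would explain the cost David remembers. (All names and sizes below
are invented for illustration.)

```c
#include <stdint.h>

#define PAGES_PER_MAP 1024   /* assumed: data pages tracked per map page */

/* Hypothetical per-table space map: one byte per data page giving an
   approximate count of free space, in units of 32 bytes. */
typedef struct {
    uint8_t free_units[PAGES_PER_MAP];
} space_map_t;

/* Linear search for the first page with enough room for a record of
   'length' bytes; returns its slot, or -1 if no tracked page qualifies.
   Every insert, update and delete has to keep free_units[] roughly
   accurate, which is where the maintenance cost would come from. */
static int find_page_with_space(const space_map_t *map, unsigned length)
{
    unsigned needed = (length + 31) / 32;
    for (int i = 0; i < PAGES_PER_MAP; i++)
        if (map->free_units[i] >= needed)
            return i;
    return -1;
}
```
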
Jan Mikkelsen
janm@...