Subject: Re: [IB-Architect] Disk Bandwidth was License Question
Author: Jim Starkey
At 12:59 PM 3/25/00 -0700, you wrote:
>From: Tim Uckun <tim@...>
>At the risk of sounding stupid...

A risk you're willing to assume.

>If IB is able to split up databases if they exceed 2 gigs why not just
>split up the database at arbitrary sizes and put different chunks on
>different drives. I suppose what is really needed is the ability to specify
>where the individual tables go so I can tune my database the way I want.
>I.e If I use two tables very frequently I want to be able to assign them
>their own drives. In the old days this was an easy thing to do because
>everything was not in one file. I guess it would not be so bad to return to
>the bad old days when every object was it's own file in some circumstances.

First of all, frequently used tables live in the cache, so nothing
is going to improve their access times.

User level tuning is the first resort of incompetents. The database
knows what it's doing, you don't. Expecting a human to tell a computer
how it should run its affairs is just plain dumb.

Classical database dogma says that placement control (clustered primary
indexes, for example) is critical to performance. This is hogwash.
Designing algorithms that are insensitive to tuning is what is important.

Interbase indexes store record numbers in the index; index retrievals
return a structure called a sparse bitmap. Interbase traverses all
indexes (not just primary ones) in bitmap order, which is physical
order on disk, minimizing page reads by maximizing cache hits. This
strategy always wins.
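The idea can be sketched roughly like this (a minimal illustration, not
Interbase's actual implementation: real sparse bitmaps are compressed
structures, and the record-to-page mapping here is invented for the
example):

```python
def index_lookup(index, key):
    """Return the record numbers matching `key`. A plain set stands in
    for a sparse bitmap; a real engine compresses runs of bits."""
    return set(index.get(key, ()))

def pages_visited(candidates, records_per_page):
    """Visit matching records in ascending record-number order, which
    is physical order on disk, so each data page is read only once."""
    pages = []
    for recno in sorted(candidates):
        page = recno // records_per_page
        if not pages or pages[-1] != page:
            pages.append(page)  # one sequential page read
    return pages

# Two indexes on the same table: intersect their bitmaps before
# touching any data pages, then fetch in physical order.
name_idx = {"smith": [3, 17, 42, 43]}
city_idx = {"boston": [3, 9, 42, 43, 88]}
hits = index_lookup(name_idx, "smith") & index_lookup(city_idx, "boston")
print(pages_visited(hits, records_per_page=20))  # -> [0, 2]
```

Because the fetch order is disk order, the result is the same no matter
which index the optimizer consulted first, which is why placement of
the indexed data stops mattering.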

The way to use multiple disks is not placement control but striping
so that successive page reads automatically go to a device not
in use. Well, duh. And maybe, just maybe, Linux will figure this
out someday.
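A toy model of round-robin striping (my own sketch, with an assumed
fixed stripe size; real volume managers add parity, variable chunk
sizes, and so on):

```python
def stripe_location(page_no, stripe_pages, n_disks):
    """Map a database page to (disk, offset) under round-robin
    striping, so runs of successive pages fan out across devices."""
    stripe = page_no // stripe_pages
    disk = stripe % n_disks
    offset = (stripe // n_disks) * stripe_pages + page_no % stripe_pages
    return disk, offset

# A sequential scan of pages 0..7 with 2-page stripes on 2 disks
# alternates devices every stripe, keeping both spindles busy.
for p in range(8):
    print(p, stripe_location(p, stripe_pages=2, n_disks=2))
```

No placement decision from the user is involved: the mapping is purely
arithmetic, which is the point of the argument above.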

Placement control always loses. Always. Its only redeeming
value is that you can blame poor performance on the user rather than
the database designer, where it belongs.

Jim Starkey