Subject | Re: [Firebird-Architect] Index structures |
---|---|
Author | Jim Starkey |
Post date | 2003-06-07T23:28:48Z |
Pretty much what I expected. The index is fluffier, requiring more page reads. If the benchmark has to read disk pages, your scheme is slower. If it's running out of cache, it's faster. The real world doesn't run out of cache.
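For concreteness, here's a rough sketch that just restates the first-run / second-run figures quoted below as percentage deltas (the numbers are Arno's; the arithmetic is only illustrative):

```python
# Restates the first-run / second-run figures quoted below as percentage
# deltas; the numbers are Arno's, the arithmetic is only illustrative.
old = {"reads": 5273, "fetches": 1400671, "cold_sec": 14.03, "warm_sec": 4.85}
new = {"reads": 5665, "fetches": 1601228, "cold_sec": 14.22, "warm_sec": 2.46}

for key in old:
    delta = (new[key] - old[key]) / old[key] * 100.0
    print(f"{key:8s} {old[key]:>10} -> {new[key]:>10}  ({delta:+.1f}%)")

# reads    +7.4%   more physical page reads (the fluffier index)
# fetches  +14.3%  more logical page visits
# cold_sec +1.4%   slightly slower when pages come off disk
# warm_sec -49.3%  much faster once everything is already cached
```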
I don't think it scales. The bigger the index, the more you're hurt by the fluffier index. If your index goes an extra level, you lose big time. As the index size increases relative to cache size, your CPU advantage is lost to increased disk I/O.
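A back-of-the-envelope sketch of the extra-level effect, with made-up page and entry sizes rather than Firebird's actual on-page layout: fattening each entry lowers the fan-out, and past some row count that costs an extra index level, i.e. one more page visit per lookup.

```python
import math

def index_levels(rows, page_size=4096, entry_bytes=20, page_overhead=64):
    """Rough B-tree depth for a given entry size.  Page size, entry size
    and overhead are made up for illustration, not Firebird's layout."""
    fanout = (page_size - page_overhead) // entry_bytes  # entries per page
    levels, pages = 1, math.ceil(rows / fanout)          # leaf level
    while pages > 1:                                      # add upper levels until a single root
        pages = math.ceil(pages / fanout)
        levels += 1
    return levels

rows = 5_000_000
print(index_levels(rows, entry_bytes=20))  # compact entries  -> 3 levels
print(index_levels(rows, entry_bytes=28))  # fluffier entries -> 4 levels, one extra page per lookup
```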
CPUs double in speed every 18 months. Disks double in speed every 25 years. Trading increased disk traffic for lower CPU utilization isn't, to my old fogie thinking, a win.
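To put toy numbers on that trend (purely illustrative, not measurements of either index format):

```python
def query_time(cpu_sec, io_sec, years, cpu_doubling=1.5, disk_doubling=25.0):
    """Projected elapsed time after `years` of hardware improvement,
    assuming the workload itself stays the same.  Purely illustrative."""
    return cpu_sec / 2 ** (years / cpu_doubling) + io_sec / 2 ** (years / disk_doubling)

# Design A: more CPU, less disk traffic.  Design B: less CPU, more disk traffic.
for years in (0, 3, 6):
    a = query_time(cpu_sec=3.0, io_sec=1.0, years=years)
    b = query_time(cpu_sec=1.5, io_sec=2.2, years=years)
    print(f"after {years} years: A = {a:.2f}s  B = {b:.2f}s")

# B starts out slightly ahead, but the gap reverses: the CPU term keeps
# shrinking while the extra disk time barely moves.
```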
There are secondary issues of developer-friendliness concerning index stability (versus rebuild) when the scale factor or datatype of the index changes. Most folks argue performance over flexibility. The fact that there are so many Firebird developers suggests that maybe I got it right... A reasonable person, of course, may differ.
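To make the stability point concrete: if numeric index keys are reduced to a canonical, scale-independent form, altering a column's declared scale doesn't invalidate existing entries. A minimal sketch of that idea only, not Firebird's actual key encoding:

```python
import struct

def canonical_numeric_key(stored_int, scale):
    """Sketch of a scale-independent key: the stored integer and its declared
    scale are folded into one canonical double before the key is built, so
    altering the column's scale leaves existing index entries valid.
    This mirrors the general idea only, not Firebird's actual key format."""
    return struct.pack(">d", stored_int / 10 ** scale)

# 12.50 declared as NUMERIC(9,2) and later as NUMERIC(9,4) yields the same key:
assert canonical_numeric_key(1250, 2) == canonical_numeric_key(125000, 4)
```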
Jim Starkey
At 11:44 PM 6/7/2003 +0200, Arno Brinkman wrote:
>Hi Jim,
>
> > I'd like to see the performance numbers that show an improvement on some
> > reproducible workload.
>
>Current index structure :
>-----------------------------------
>[first time]
>Current memory = 2755776
>Delta memory = 422052
>Max memory = 2826184
>Elapsed time= 14.03 sec
>Buffers = 10000
>Reads = 5273
>Writes 0
>Fetches = 1400671
>---
>[second time]
>Current memory = 3003324
>Delta memory = 0
>Max memory = 3364352
>Elapsed time= 4.85 sec
>Buffers = 10000
>Reads = 0
>Writes 0
>Fetches = 1400369
>-----------------------------------
>
>New index structure :
>-----------------------------------
>[first time]
>Current memory = 2757172
>Delta memory = 423816
>Max memory = 2825812
>Elapsed time= 14.22 sec
>Buffers = 10000
>Reads = 5665
>Writes 0
>Fetches = 1601228
>---
>[second time]
>Current memory = 2939952
>Delta memory = 0
>Max memory = 3300980
>Elapsed time= 2.46 sec
>Buffers = 10000
>Reads = 0
>Writes 0
>Fetches = 1600926
>-----------------------------------