Subject: Re: question about optimizing group by
> Sigh. If you will keep introducing new dimensions to the problem,
> my answers will get less and less useful.

Actually, they are very useful, because what I want is insight into
how fb deals generally with a specific case, not tech support on a
real-life problem. So when I said "assume db is on ramdrive", what I
wanted was abstracting away the access-cost problem.
Turns out access cost > CPU cost because of the fixed assumption that
the db is on slow media all the time. This alone answers many questions.
> Basically, if you're planning on releasing on RAM disk, spend the
> time to benchmark and tune your queries against your data rather
> than speculating.

Ideally, but I'd rather design consciously than by trial and error, as
much as is feasible.
> If your groups consist of 10-20 records, a FIRST (the hypothetical
> random aggregate) doesn't save much over a sort. If the groups are
> 10,000 or more, it saves a lot.

Ahaa :)
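The trade-off above can be sketched outside the engine. This is just an illustration in Python, not anything from Firebird's code: the hypothetical FIRST aggregate needs one pass with a hash probe per record, while a sort-based GROUP BY pays O(n log n) over every record of every group, so the bigger the groups, the more sorting work FIRST avoids.

```python
import random

def first_per_group_sort(rows):
    """GROUP BY via sort: order all rows by group key, then take the
    first row of each run. Stable sort keeps the original first row."""
    out = {}
    for key, val in sorted(rows, key=lambda r: r[0]):  # O(n log n) over all records
        if key not in out:
            out[key] = val
    return out

def first_per_group_hash(rows):
    """Hypothetical FIRST aggregate: keep the first record seen per
    group in a hash table; no sorting at all."""
    out = {}
    for key, val in rows:
        if key not in out:  # one O(1) probe per record
            out[key] = val
    return out

def make_rows(n_groups, group_size):
    """Synthetic data: n_groups groups of group_size records, shuffled."""
    rows = [(g, random.random()) for g in range(n_groups) for _ in range(group_size)]
    random.shuffle(rows)
    return rows

# Same total row count, different shapes:
small_groups = make_rows(n_groups=10_000, group_size=15)      # 10-20 per group
big_groups   = make_rows(n_groups=15,     group_size=10_000)  # 10,000+ per group
```

With the small groups, the sort's per-group overhead is modest, so FIRST buys little; with the big groups, skipping the sort of 10,000 records per group eliminates most of the work, which matches the claim quoted above.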