Subject | Re: Count(*) on big tables |
---|---|
Author | Ali Gökçen |
Post date | 2004-05-05T15:55:39Z |
Hi,
No, not too much.
50 million rows means millions of data pages, and therefore millions of
disk clusters.
Calculate your disk's average access time, multiply it by those millions
of clusters, and add the transfer time.
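To put rough numbers on that calculation, here is a back-of-envelope sketch; every figure in it (rows per page, page size, per-page access time, transfer rate) is an assumption chosen for illustration, not a measurement of any particular system:

```python
# Back-of-envelope estimate of a full-scan COUNT(*) on a 50-million-row table.
# Every constant here is an assumed example value, not a measured one.

rows = 50_000_000
rows_per_page = 100            # assumed average rows per data page
page_size_bytes = 8 * 1024     # assumed 8 KB database page size
access_ms_per_page = 1.0       # assumed effective access time per page
                               # (mostly sequential reads; pure random seeks
                               #  would be closer to 10 ms and far slower)
transfer_mb_per_s = 40.0       # assumed sustained disk transfer rate

pages = rows / rows_per_page                                   # ~500,000 pages
access_s = pages * access_ms_per_page / 1000.0                 # positioning time
transfer_s = pages * page_size_bytes / (transfer_mb_per_s * 1024 * 1024)

print(f"data pages      : {pages:,.0f}")
print(f"access time     : {access_s / 60:.1f} min")
print(f"transfer time   : {transfer_s / 60:.1f} min")
print(f"estimated total : {(access_s + transfer_s) / 60:.1f} min")
```

With these assumptions the scan comes out in the same order of magnitude as the 8 minutes observed; plug in your own disk's numbers to see whether the time is dominated by access time or by raw transfer.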
The database cache can't help with a full scan of a big table; it just
adds extra time lost to useless buffering.
Regards.
Ali
--- In firebird-support@yahoogroups.com, "Jerome Bouvattier"
<JBouvattier@I...> wrote:
> Hello,
>
> I know Count(*) requires a full scan and can't return immediately. But
> isn't 8 minutes a bit too much for a 50 million rows table?
>
> The first time, I thought it was due to GC, but subsequent calls took as
> long.
>
> Any hint ?
>
> Thanks.
>
> --
> Jerome