| Subject | exec. time = f(table count). f() ? |
| --- | --- |
| Author | jbouvatt |
| Post date | 2005-06-29T21:13:56Z |
Hello,
- Suppose I have a parameterized SQL SELECT statement that queries a
table and is crafted so that an index is always used.
- Suppose rows are inserted into that table in such a way that the
index's selectivity stays the same regardless of the table's size,
i.e. whatever parameters are passed to the SELECT, it always returns
between 0 and 1000 rows.
Is there a function to estimate the query's execution time as a
function of the table's size?
For the moment, I haven't noticed much difference whether the table
holds a million or a hundred million rows. Before trying it for real,
I'd like an idea of the theoretical performance drop I should expect
as the table grows (e.g. 500 million, 1,000 million rows, etc.).
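For reference, here is the rough back-of-the-envelope model I have in mind: with a B-tree index and constant selectivity, the dominant cost of each lookup is the index traversal, which grows with the tree's depth, roughly log_fanout(rows). The sketch below is only an illustration; the fanout of 500 entries per page and the helper name are my own assumptions, not measured values.

```python
import math

def estimated_btree_levels(row_count, fanout=500):
    """Rough estimate of B-tree index depth (levels from root to leaf).

    fanout=500 is an assumed average number of entries per index page;
    the real value depends on key size, page size and fill factor.
    """
    return max(1, math.ceil(math.log(row_count, fanout)))

for rows in (1_000_000, 100_000_000, 500_000_000, 1_000_000_000):
    print(f"{rows:>13,} rows -> ~{estimated_btree_levels(rows)} index levels")
```

Under those assumptions the depth only goes from about 3 levels at one million rows to about 4 at a billion, which would explain why I have seen so little difference so far.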
TIA
--
Jerome