| Subject | IN predicate 1500 items limit |
|---|---|
| Author | Aitor Gómez González |
| Post date | 2005-03-31T10:18:27Z |
Hi,
I call a selectable SP to get a collection of record IDs from a table,
then use a select against a view to get a lot of data related to the
selected IDs. The selection criteria are too complex to include
everything in a single select.
My first approach was:
select
field_1, field_2, field_3, ..., field_n
from
my_table
where
my_table.id in (select ID from sp_get_my_table_ids(param_1, param_2,
..., param_n))
but this was too slow, so I tried hard-coding the IN list:
select
field_1, field_2, field_3, ..., field_n
from
my_table
where
my_table.id in (id_1, id_2, id_3, ..., id_n)
This one is a lot faster, but I hit an error when running selects with
more than 1499 elements in the IN set. I worked around it by running
consecutive selects with sets of at most 1499 elements each.
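To make that concrete, here is a minimal sketch of the consecutive
selects, assuming the ID set has already been fetched from the SP and
split into chunks of at most 1499 values (the chunk boundaries below
are only illustrative):
-- first chunk: up to 1499 IDs
select
field_1, field_2, field_3, ..., field_n
from
my_table
where
my_table.id in (id_1, id_2, ..., id_1499)
-- second chunk: the next IDs, and so on until the set is exhausted
select
field_1, field_2, field_3, ..., field_n
from
my_table
where
my_table.id in (id_1500, id_1501, ..., id_n)
The client then concatenates the result sets of all the chunks.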
my_table has about 200 thousand records now, but could have more than
10 million in production. The ID field is a bigint primary key.
What I want to know is: Is there a better (faster) way to do this?
Would I gain anything by decreasing the number of elements per select
and increasing the number of selects? What about sorting the ID set
before hard-coding it? And what is the reason for the 1499 limit?
Thanks in advance,
Aitor.