Subject: How to optimize dynamically generated SQL?
Author: Christian Kaufmann
Hi,

we have a query builder that lets the user define filters for a
query. From this information we build SQL statements. A typical
statement looks like this:

select UNF.ERFASST, UNF.BFSNUMMER
from UNFALL_FEA UNF
where
UNF.UNFALL_FEAID in
(select UF1.UNFALL_FEAID from UNFALL_FEA UF1
 join UNFALLOBJEKT UOB1 on UF1.UNFALL_FEAID = UOB1.UNFALLID
 join UNFALLATTRIBUT UA1 on UOB1.UNFALLOBJEKTID = UA1.UNFALLOBJEKTID
   and UA1.KATALOGID = 230 and UA1.ITEMID in (210)
 where UF1.ERFASST = '11.03.2002')

(Note: the original post selected UNFALL_FEA.ERFASST / UNFALL_FEA.BFSNUMMER, but since the table is aliased as UNF, the columns must be qualified with the alias.)

And the plan:

PLAN (UNF NATURAL)
PLAN JOIN (UA1 INDEX (IX_UNFALLATTRIBUT_KATALOG), UOB1 INDEX (PK_UNFALLOBJEKT), UF1 INDEX (IX_UNFALL_FEA_PRIMARY2))


UNFALL_FEA > 20'000 records
UNFALLOBJEKT > 35'000 records
UNFALLATTRIBUT > 550'000 records

The subselect currently returns only 8 records, but it could also return several hundred.

Right now there is only one subselect, but there can be more than one,
combined with AND and/or OR.
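
With two filters the generated statement would look something like this
(the second subselect and its KATALOGID / ITEMID values are made-up
examples, just to show the shape):

select UNF.ERFASST, UNF.BFSNUMMER
from UNFALL_FEA UNF
where
UNF.UNFALL_FEAID in
(select UF1.UNFALL_FEAID from UNFALL_FEA UF1
 join UNFALLOBJEKT UOB1 on UF1.UNFALL_FEAID = UOB1.UNFALLID
 join UNFALLATTRIBUT UA1 on UOB1.UNFALLOBJEKTID = UA1.UNFALLOBJEKTID
   and UA1.KATALOGID = 230 and UA1.ITEMID in (210)
 where UF1.ERFASST = '11.03.2002')
and UNF.UNFALL_FEAID in
(select UF2.UNFALL_FEAID from UNFALL_FEA UF2
 join UNFALLOBJEKT UOB2 on UF2.UNFALL_FEAID = UOB2.UNFALLID
 join UNFALLATTRIBUT UA2 on UOB2.UNFALLOBJEKTID = UA2.UNFALLOBJEKTID
   and UA2.KATALOGID = 240 and UA2.ITEMID in (110, 120))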

My question is: how can I improve the SQL? Or should I run each
subselect separately and collect the ids in my application, then
union / intersect the different id sets, and finally fetch the
selected records with a new query over the resulting id set?
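
To make that alternative concrete: each subselect would be executed on
its own, the id sets would be intersected (for AND) or unioned (for OR)
in the application, and the final fetch would be a plain IN list (the
ids below are made-up examples):

select UNF.ERFASST, UNF.BFSNUMMER
from UNFALL_FEA UNF
where UNF.UNFALL_FEAID in (1017, 1042, 1068, 1101, 1177, 1204, 1233, 1256)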

cu Christian