Subject Re: Ignoring a FK index
Author Adam
Hi Alexandre,

> I share with you your perception that an FK should be declarative
> instead of implemented by triggers.

Part of the problem with the trigger approach (apart from the
constraint not being obvious) is that if I delete a record from
tableA, I would need a before delete trigger to remove the matching
records from tableB (to replicate a cascade). However, due to MGA,
another transaction that started before I commit could insert a
record into tableB matching the record I have just deleted, and it
would not get an exception.
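
Roughly what I mean, as a minimal sketch (untested; the a_id column
name is made up, and SET TERM is omitted):

CREATE TRIGGER tableA_bd FOR tableA
ACTIVE BEFORE DELETE POSITION 0
AS
BEGIN
  /* Hand-rolled ON DELETE CASCADE: remove the children of the row
     being deleted. Nothing here stops a concurrent transaction
     (started before my commit) from inserting a new matching
     tableB row, which is the hole a declared FK closes. */
  DELETE FROM tableB WHERE tableB.a_id = OLD.a_id;
END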

>
> I don't know how to make the optimizer choose another index based
> on the same column :-( I think this would be a good place for a
> HINT clause....
>

Even without the hint directive, can you think of any reason you
would ever want to use a FK index with poor selectivity when a
compound index that starts with the FK field and has perfect
selectivity exists?
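
For example (made-up table and column names), the declared FK below
gets its own automatic index on company_id, duplicating the prefix
of the much more selective compound index:

CREATE TABLE company (
  company_id INTEGER NOT NULL PRIMARY KEY
);

CREATE TABLE invoice (
  invoice_id INTEGER NOT NULL PRIMARY KEY,
  company_id INTEGER NOT NULL REFERENCES company (company_id),
  invoice_no INTEGER NOT NULL
);

/* Same leading column as the automatic FK index, but near-perfect
   selectivity. */
CREATE UNIQUE INDEX idx_invoice_company_no
  ON invoice (company_id, invoice_no);

With only company_id in the where clause, the optimiser can still
pick the automatic FK index, which is the situation above.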

> I live with a lot of duplicate indexes on my tables because of
> this.
>
> I have a table that holds the company data; in some sites I have 5
> records, in others just 1, and it has FKs to almost all tables in
> my database, so you can bet those FK indexes will have a lot of
> duplicates (hundreds of thousands) in some tables.
>

Perhaps it is as simple as an improvement to the optimiser to ignore
any index with a selectivity worse than ____ [insert value]. Then
again, perhaps this is easier said than done.
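
If it helps, the selectivity figure the optimiser works from can be
read from the system tables; something along these lines (using
tableB from above):

/* RDB$STATISTICS is roughly 1 / (number of distinct key values), so
   a value close to 1 means a very poor index. It is only refreshed
   by SET STATISTICS INDEX <name>, index recreation or a restore. */
SELECT RDB$INDEX_NAME, RDB$STATISTICS
FROM RDB$INDICES
WHERE RDB$RELATION_NAME = 'TABLEB'
ORDER BY RDB$STATISTICS;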

> I know I am a purist on this aspect, but I like it that way. If I
> find a place where it considerably slows down my queries and
> adding +0 to avoid the index use does not help, I will revise it,
> but until now I have been able to live with it.
>
> The case you provided should work equally well under these two
> conditions:
> 1.) Adding +0 to avoid the index usage
> 2.) Having a way to declare a FK without the need for an index.
>
> I don't know why you gain speed if you set the plan manually to
> use your "test" index. I know it should improve the garbage
> collection process, but the bitmap generated by the index scan
> should map to all (or almost all) data pages, so a natural read
> should perform equally well. I am a bit confused by your saying
> that using the "test" index performs better than no index at all.
> Are you sure you are not seeing the "fast first row" effect? As I
> see it, a full fetch should be equal to (or faster than) an
> indexed read that needs to read all data pages.
>
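
(For reference, forcing the plan the way described above looks
roughly like this; the "test" index name comes from the discussion
and the column name is made up:)

/* Force a natural (sequential) read for comparison. */
SELECT COUNT(*)
FROM tableB
WHERE tableB.a_id = 1
PLAN (TABLEB NATURAL);

/* Force the "test" index instead of the automatic FK index. */
SELECT COUNT(*)
FROM tableB
WHERE tableB.a_id = 1
PLAN (TABLEB INDEX (TEST));

Running both gives two timings that can be compared directly.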

Yeah, it may have been caching that made the significant difference.
The actual query also had a where clause on that same field, so I
don't know if that makes a difference. It is a bit tricky because
the query is generated on the fly from the available data, but I
have made the change to add +0 to the foreign key field, and that at
least makes the performance acceptable.
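
The change is nothing fancier than this (column name made up as
before):

/* +0 makes the FK column non-indexable for this predicate. */
SELECT *
FROM tableB
WHERE tableB.a_id + 0 = 1;

Because a_id + 0 is an expression rather than a bare column
reference, the poorly selective index on a_id cannot be used for
that term, and the optimiser falls back to a natural scan or to a
better index on another part of the where clause.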

In any case, I have reduced the query time from about a minute to 10
seconds, and there is another section of code, where this
information is parsed, that takes too long IMO, so I will focus on
that first. (The first rule of optimisation: work on whatever will
give you the best improvement.)

Thanks for your help

Adam