Subject | Re: Ignoring a FK index
---|---
Author | Adam
Post date | 2006-02-02T03:57:57Z
Hi Alexandre,

> I share with you your perception that an FK should be declarative
> instead of implemented by triggers.

Part of the problem of the trigger approach (apart from the
constraint not being obvious) is that if I delete a record from
tableA, then I would have a before delete trigger to remove matching
records from tableB (to replicate cascade). However, due to MGA
another transaction started before I commit could insert a record
into tableB matching the record I have just deleted, and it will
not get an exception.
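
To make that concrete, a rough sketch of such a trigger would look
like this (table and column names are invented for illustration):

```sql
-- Hand-rolled cascade; TABLEA(ID) and TABLEB(A_ID) are hypothetical.
SET TERM ^ ;

CREATE TRIGGER TABLEA_BD FOR TABLEA
ACTIVE BEFORE DELETE POSITION 0
AS
BEGIN
  /* Emulates ON DELETE CASCADE, but under MGA a concurrent
     transaction that started before this one commits can still
     insert a TABLEB row matching OLD.ID and commit without
     getting an exception, which a declarative FK would prevent. */
  DELETE FROM TABLEB WHERE A_ID = OLD.ID;
END^

SET TERM ; ^
```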
> I don't know how to make the optimizer choose another index based on
> the same column :-( I think here would be a good place for a HINT
> clause....
Even without the hint directive, can you think of any reason you
would ever want to use a FK index with poor selectivity where a
compound key starting with the FK field with perfect selectivity
exists?
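
For illustration, the sort of situation I mean (with invented names):

```sql
-- The FK constraint creates its own index on A_ID, which has poor
-- selectivity when TABLEA holds only a handful of rows.
CREATE TABLE TABLEB (
  A_ID INTEGER NOT NULL REFERENCES TABLEA (ID),
  SEQ  INTEGER NOT NULL
);

-- A compound index leading with the same FK column can serve the same
-- lookups, and the second column makes it perfectly selective.
CREATE UNIQUE INDEX IX_TABLEB_AID_SEQ ON TABLEB (A_ID, SEQ);
```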
> I live with a lot of duplicate indexes on my tables because of this.
> I have a table that holds the company data, in some sites I have 5
> records, in others just 1, and it has FKs to almost all tables on my
> database, so you could bet it will have a lot of duplicates (hundreds
> of thousands) in some tables.

Perhaps it is as simple as an improvement to the optimiser to ignore
any index with a selectivity worse than ____ [insert value]. Then
again, perhaps this is easier said than done.
> I know I am a purist on this aspect, but I like it that way. If I
> found a place where it slows down my queries considerably and adding
> +0 to avoid the index use could not help, I will revise it, but until
> now I could live with it.
>
> The case you provided should work equally in these two conditions:
> 1.) Adding +0 to avoid the index usage
> 2.) Having a way to declare a FK without the need of an index.
>
> I don't know why you gain in speed if you set the plan manually to
> your "test" index. I know it should improve the garbage collection
> process, but the bitmap generated by the index scan should map to all
> (or almost all) data pages, so a natural read should perform equally.
> A bit confused here based on your words that using the "test" index
> performs better than no index at all. Are you sure that you don't see
> the "fast first row" effect? As I see it, a full fetch should be equal
> (or faster) than an indexed read that needs to read all data pages.

Yeah, it may have been caching that made the significant difference.
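
For anyone following the thread, "setting the plan manually" means
something like this (the index name is invented):

```sql
-- Forcing the optimizer onto a specific index; B is a table alias
-- and IX_TEST a hypothetical index name.
SELECT *
FROM TABLEB B
WHERE B.A_ID = 42
PLAN (B INDEX (IX_TEST));
```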
The actual query used a where clause on that same field as well, so I
don't know if that makes a difference. It is a bit tricky because the
query is generated on the fly based on the available data, but I have
made a change to add +0 to the foreign key field, and this at least
makes it acceptable.
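
For the record, the +0 change amounts to this (invented names again):

```sql
-- A_ID + 0 is an expression rather than a plain column reference,
-- so the optimizer cannot use the FK index for this predicate.
SELECT *
FROM TABLEB B
WHERE B.A_ID + 0 = 42;
```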
In any case, I have reduced the query time from about a minute to 10
seconds, and there is another section of code where this information
is parsed which takes too long IMO, so I will focus on that first.
(The first rule of optimisation: work on whatever will give you the
best improvement.)
Thanks for your help
Adam