Subject: Poor selectivity in foreign keys
Author: Salvatore Besso
Hello all,

Maybe this was already discussed in the past, but while running gstat against a
database I'm currently developing, I noticed that all foreign keys have very
poor selectivity. I expected this, because the same foreign key value is
duplicated several times.

Now, I use foreign keys to implement cascade deletes, but if I remember
correctly, someone (maybe Helen?) said some time ago that in this case it would
be better to implement the cascade deletes with a stored procedure.

Here is a partial extract of my gstat session, showing just a couple of the FKs
with poor selectivity:

Index FK_PROVINCE_TOTALS (2)
Depth: 2, leaf buckets: 74, nodes: 49220
Average data length: 0.00, total dup: 48760, max dup: 106
Fill distribution:
0 - 19% = 1
20 - 39% = 0
40 - 59% = 0
60 - 79% = 0
80 - 99% = 73

Index FK_QSO (5)
Depth: 2, leaf buckets: 127, nodes: 85387
Average data length: 0.00, total dup: 84927, max dup: 451
Fill distribution:
0 - 19% = 0
20 - 39% = 0
40 - 59% = 0
60 - 79% = 1
80 - 99% = 127

Do you think it would be better to drop the FKs and implement a stored
procedure instead, and should I expect a significant performance increase from
doing this?

Five tables are involved in the cascade: one main table and four other tables
for which the FKs are declared.
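To make the question concrete, the stored-procedure approach I have in mind
would look roughly like the sketch below. All names here (MAIN_TABLE,
CHILD1..CHILD4, MAIN_ID) are hypothetical placeholders, not my real schema:

```sql
SET TERM ^ ;

/* Minimal sketch of a cascade delete done in PSQL instead of via
   ON DELETE CASCADE foreign keys. Table and column names are
   illustrative only. */
CREATE PROCEDURE DELETE_MAIN_CASCADE (P_MAIN_ID INTEGER)
AS
BEGIN
  /* Delete the dependent rows first, then the parent row. */
  DELETE FROM CHILD1 WHERE MAIN_ID = :P_MAIN_ID;
  DELETE FROM CHILD2 WHERE MAIN_ID = :P_MAIN_ID;
  DELETE FROM CHILD3 WHERE MAIN_ID = :P_MAIN_ID;
  DELETE FROM CHILD4 WHERE MAIN_ID = :P_MAIN_ID;
  DELETE FROM MAIN_TABLE WHERE MAIN_ID = :P_MAIN_ID;
END^

SET TERM ; ^
```

One thing I am unsure about: if the FK constraints (and their automatic
indexes) are dropped, the child DELETEs above would presumably need some other
index on MAIN_ID to avoid full table scans.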

Thanks
Salvatore