Subject: Index statistics in a particular case
Author: fedetxf
I have a theoretical question regarding the way FB handles index
statistics.

I work with a table that has millions of records, and I have an index
on a column that can hold only 22 different values; typically only
6 or 7 of those values actually appear in the data.
The typical distribution of the values looks like this (see the query
sketch after the list):

0: 99% of the records
greater than 0: 1% of the records
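
To make that concrete, the distribution could be checked with a simple
aggregate; my_table and error are made-up names standing in for the
real table and column:

-- hypothetical names: my_table / error stand in for the real objects
select error, count(*) as occurrences
from my_table
group by error
order by error;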

Among that 1% of rows with a value > 0, I want to find which rows
carry each specific value.

So the typical queries look like this

select *
from my_table
where error > 0

or

select *
from my_table
where error = 3

Some joins are used too.

FB would assign the index a selectivity of 1/6, so when querying for a
specific value (error = 3) it would assume it has to read about
166,666 rows when in fact it would be more like 1,666, and when
querying for error = 0 it would again assume 166,666 rows when in fact
it would be reading about 990,000 rows.
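
For reference, as far as I know the selectivity FB has stored for an
index can be inspected in the system tables and recomputed on demand
(idx_error is a made-up index name):

-- stored selectivity of the indexes on the table
select rdb$index_name, rdb$statistics
from rdb$indices
where rdb$relation_name = 'MY_TABLE';

-- recompute the statistics for one index
set statistics index idx_error;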

Is there a risk that FB would choose a different index, based on the
statistics it would have in this situation?
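
In case it helps frame the question: I know I can see what the
optimizer actually picks by turning plan output on in isql, or force
an index with a PLAN clause (again with the made-up names from above):

set plan on;
select * from my_table where error = 3;

-- forcing the index explicitly, to compare against the optimizer's choice
select * from my_table where error = 3
plan (my_table index (idx_error));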