Subject | Re: [ib-support] Further to Performance Issue
---|---
Author | Helen Borrie |
Post date | 2002-12-06T22:29:01Z |
At 03:47 PM 06-12-02 +0000, you wrote:
>One major change we made in our application design was to reduce the
>number of tables we had for Codes and Rates. For example, in our
>original (DOS) version we would have had:
>
>Table 1 - Letters
>
>Code
>A
>B
>C
>D
>
>
>Table 2 - Numbers
>
>Code
>1
>2
>3
>4
>
>
>In the new system we created a CodeType Table
>
>CodeType
>Letters
>Numbers
>
>And a codes table that looks like
>
>Code CodeType
>A Letters
>B Letters
>C Letters
>1 Numbers
>2 Numbers
>3 Numbers
>
>
>This means that instead of having 30 small tables (5 to 1000 records)
>we now have 4 tables (others holding rates and ratetypes). It also
>means that we now are constantly selecting from the larger tables
>rather than from different smaller tables.
>
>We believe that proper indexes are in place.
>
>Could this affect performance to a large extent?
If you have indexes on the code type pointers in the main table, then
YES. These codes will have a low distribution across the table (i.e. a lot
of records but very few distinct code values), causing the classic low
selectivity problem if you index them.
You can fix this problem by making each of these indexes a composite
consisting of the code + a highly selective column (e.g. the PK if it is
integer) in that order.
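As a sketch only (table and column names here are hypothetical): assuming a
main table CODES with a low-selectivity CODETYPE column and an integer
primary key CODE_ID, the composite index would be:

    CREATE INDEX IDX_CODES_TYPE_ID ON CODES (CODETYPE, CODE_ID);

With the PK appended, every entry in the index is unique, which avoids the
long duplicate chains that make low-selectivity indexes so costly to maintain.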
The other thing to look at, if you converted this from a desktop database,
is whether you have duplicate indexes on primary and/or foreign keys. If
you explicitly added indexes to these, then you have duplicates, since
Fb/IB create automatic indexes for keys. Get rid of the duplicate indexes.
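One way to spot them (a sketch against the system tables, nothing specific
to your schema) is to list each index alongside the constraint that owns it;
a manually created index that covers the same columns as one of the
automatic RDB$PRIMARY/RDB$FOREIGN indexes is a candidate to drop:

    SELECT i.RDB$INDEX_NAME, i.RDB$RELATION_NAME, c.RDB$CONSTRAINT_TYPE
    FROM RDB$INDICES i
    LEFT JOIN RDB$RELATION_CONSTRAINTS c
      ON c.RDB$INDEX_NAME = i.RDB$INDEX_NAME
    ORDER BY i.RDB$RELATION_NAME, i.RDB$INDEX_NAME;

    DROP INDEX IDX_MYTABLE_ID;  /* hypothetical duplicate */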
It's likely that you still have substantial work to do on metadata to get
your database into good shape for client/server work. For example, if you
imported typical DBF metadata with hierarchical keys, then you have a
performance-crippling situation. Those should be split up so that primary
and foreign keys do not overlap. You might get some assistance from the
client/server paper which you can download from
www.ibobjects.com/TechInfo.html
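To illustrate the kind of split I mean (all names hypothetical): a detail
table imported from DBF typically arrives with PRIMARY KEY (ORDER_ID,
ITEM_NO), where ORDER_ID is also the foreign key to the master. Give the
detail its own key instead, so the two keys no longer share columns:

    CREATE TABLE ORDER_ITEMS (
      ITEM_ID   INTEGER NOT NULL,   /* its own primary key */
      ORDER_ID  INTEGER NOT NULL,   /* foreign key only */
      ITEM_NO   INTEGER,
      PRIMARY KEY (ITEM_ID),
      FOREIGN KEY (ORDER_ID) REFERENCES ORDERS (ORDER_ID)
    );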
I *don't* subscribe at all to the suggestion that you do these complex calc
runs on the client. Correct your structures and run these processes on the
server. As you rightly observed, that's what C/S is for.
You can't control the timing of interactive queries but your bigger
pre-defined calculation runs should be done in quiet times as a matter of
procedure. Apart from helping to distribute the load on the server, it
will also be necessary in order for you to do the big runs in SNAPSHOT
transaction isolation without blocking others' work on the same data.
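For example, in ISQL a big run can be wrapped in its own SNAPSHOT
transaction (the procedure name is hypothetical):

    SET TRANSACTION READ WRITE WAIT ISOLATION LEVEL SNAPSHOT;
    EXECUTE PROCEDURE CALC_MONTHLY_RATES;
    COMMIT;

The run sees a stable view of the data as at the moment the transaction
started, and readers are never blocked under Fb/IB's versioning
architecture.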
Get hold of a tool with which you can monitor what the optimizer does with
your queries. The plan will show you which indexes are being used. IB_SQL
(free from the IB Objects site, above) has such a tool and you can also see
the PLAN for any query upon preparing, without needing to run the
query. There are several other such tools out there.
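In ISQL you can get the same information with SET PLAN (the index name in
the output below is just illustrative):

    SET PLAN ON;
    SELECT CODE FROM CODES WHERE CODETYPE = 'Letters';
    /* ISQL prints the plan ahead of the rows, e.g.
       PLAN (CODES INDEX (IDX_CODES_TYPE_ID)) */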
heLen