Subject: Best way to determine field precision?
Author: woodsplitter.rm
I'm maintaining and enhancing an Interbase/Firebird driver for
Python ( http://kinterbasdb.sourceforge.net ). Python's standard
database API requires that the driver return the precision of SELECTed
fields. I'm talking about *precision* here, not *scale* (which is
already provided by the IB API as XSQLVAR.sqlscale). For example, for
a NUMERIC(18,2) field, the precision is 18 and the scale is 2.

It's obvious that, given the relation name and field name of a
SELECTed field, one can determine its precision by querying the system
tables, but there are two problems with this approach:

1. The relation name and field name of a SELECTed field are not always
provided by the IB API, as in the case of a dynamically computed field
(e.g., "SELECT x/2 FROM SOME_TABLE").
Is there *any* way to programmatically determine the field's
precision in this case?

2. Performance is terrible.
In the worst case, one must query not just one or two system tables
but several: first to determine whether XSQLVAR.relname refers to a
table, stored procedure, or view; then to retrieve the "internal
name" of the field in question (e.g., RDB$744); and finally to
retrieve the field's precision from RDB$FIELDS.
Because of the requirements of the Python DB API 2.0, this process
must be performed for every field of every SELECT query at the time
the query is executed. The results could be cached, except that a
field's precision might in theory be changed by an ALTER statement,
invalidating the cache.
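For reference, the lookup I'm describing looks roughly like the sketch
below for the simple table/view and stored-procedure cases. This is only
a sketch: field_precision is a hypothetical helper of my own, though
RDB$RELATION_FIELDS, RDB$PROCEDURE_PARAMETERS, RDB$FIELD_SOURCE, and
RDB$FIELDS.RDB$FIELD_PRECISION are the actual system-table names (and
RDB$FIELD_PRECISION may be NULL in pre-dialect-3 databases, as far as I
can tell):

```python
# Sketch: resolve a SELECTed field's declared precision via the system
# tables.  Assumes an InterBase 6 / Firebird dialect-3 database; the
# helper itself is hypothetical, not part of any driver.

_TABLE_LOOKUP = """
    SELECT f.RDB$FIELD_PRECISION
    FROM RDB$RELATION_FIELDS rf
    JOIN RDB$FIELDS f ON f.RDB$FIELD_NAME = rf.RDB$FIELD_SOURCE
    WHERE rf.RDB$RELATION_NAME = ? AND rf.RDB$FIELD_NAME = ?
"""

_PROC_LOOKUP = """
    SELECT f.RDB$FIELD_PRECISION
    FROM RDB$PROCEDURE_PARAMETERS pp
    JOIN RDB$FIELDS f ON f.RDB$FIELD_NAME = pp.RDB$FIELD_SOURCE
    WHERE pp.RDB$PROCEDURE_NAME = ? AND pp.RDB$PARAMETER_NAME = ?
"""

def field_precision(cur, relname, sqlname):
    """Return the declared precision of relname.sqlname, or None.

    Tries the table/view path first, then stored-procedure output
    parameters.  `cur` is any DB API 2.0 cursor on the same database
    (paramstyle 'qmark', as in kinterbasdb).
    """
    for query in (_TABLE_LOOKUP, _PROC_LOOKUP):
        cur.execute(query, (relname, sqlname))
        row = cur.fetchone()
        if row is not None and row[0] is not None:
            return row[0]
    return None
```

Even collapsed into two queries like this, it is still one or two extra
round trips per SELECTed field, which is exactly the performance problem
described above.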


So, is it possible to determine a SELECTed field's precision in
*every* case, and can it be done with decent performance? Ideally,
the API would provide XSQLVAR.sqlprecision along with
XSQLVAR.sqlscale.
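For the computed-field case, the best fallback I can come up with is to
infer an *upper bound* on precision from the storage class reported in
XSQLVAR.sqltype. The type codes below are the standard ibase.h values,
but the mapping itself is my own assumption (the declared precision of a
computed expression seems genuinely unrecoverable):

```python
# Fallback sketch: upper-bound precision from the XSQLVAR storage class.
# SQL_SHORT/SQL_LONG/SQL_INT64 are the ibase.h sqltype codes; NUMERIC and
# DECIMAL values are stored in one of these integer types, so the widest
# precision each can hold serves as an upper bound, not the declared value.

SQL_SHORT, SQL_LONG, SQL_INT64 = 500, 496, 580

_MAX_PRECISION = {SQL_SHORT: 4, SQL_LONG: 9, SQL_INT64: 18}

def precision_upper_bound(sqltype):
    """Upper bound on precision from XSQLVAR.sqltype, or None if N/A."""
    # Mask off the low bit, which flags a nullable column.
    return _MAX_PRECISION.get(sqltype & ~1)
```

But that gives 18 for any INT64-backed expression, even one declared
NUMERIC(10,2), so it is no substitute for a real XSQLVAR.sqlprecision.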

Thanks.