Subject: Re: [Firebird-Java] Re: Only real types in FBDatabaseMetaData.getTypeInfo
I feel like we are talking at cross purposes here, for which I apologize.
On 2002.11.19 09:11:34 -0500 Blas Rodriguez Somoza wrote:
> Hello David
> > > There are two problems in the mapping.
> > >
> > > The first problem is that the typeInfo serves to map SQL types to
> > > java.sql.Types, and can be many to one but not one to many.
> > > Each SQL type must have a row in the table and only one.
> > Is this in the spec?
> The API docs for getTypeInfo begin with "Retrieves a description of all
> the standard SQL types supported by this database." In this sentence the
> word "standard" is usually read as the database's standard SQL types, not
> the SQL-92/99 standard types.
> > I looked in my SQL reference, and apparently I made a mistake about
> > NUMERIC and DECIMAL: NUMERIC requires exact precision, while DECIMAL
> > allows greater precision than requested. I haven't checked yet to verify
> > that Long.MAX_VALUE can be stored in DECIMAL(18,0), but it is at least
> > allowed by the SQL spec.
> In Firebird, BIGINT values can be stored in DECIMAL or NUMERIC fields,
> but only because Firebird does not enforce precision,
> something that I expect will change in the future.
I have now verified that DECIMAL(18,0) will store and retrieve
Long.MAX_VALUE and Long.MIN_VALUE. Why would Firebird change the
implementation to reduce the allowed precision here, since there is no
requirement to do so? There is such a requirement for NUMERIC, but I see
no reason to incur the extra computation for DECIMAL.
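As a sanity check on the arithmetic here (this is not driver code, just the digit counts involved): Long.MAX_VALUE has 19 decimal digits, so a strictly enforced 18-digit precision could not hold it; the point above is that the SQL spec lets DECIMAL(18,0) provide *more* precision than declared, which is why Firebird's int64-backed DECIMAL works.

```java
// Digit-count sanity check for the precision argument above.
// Long.MAX_VALUE (9223372036854775807) has 19 decimal digits, one more
// than the declared precision of DECIMAL(18,0); only because DECIMAL may
// exceed its declared precision can an int64-backed column hold it.
public class DecimalPrecisionCheck {
    public static void main(String[] args) {
        int digits = Long.toString(Long.MAX_VALUE).length();
        System.out.println(digits);      // 19
        System.out.println(digits > 18); // true
    }
}
```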
> Let me add another problem your change creates: from a table's metadata
> the driver can't know whether a NUMERIC/DECIMAL(18,0) column is a generic
> NUMERIC that maps to NUMERIC, or a NUMERIC(18,0) that maps to BIGINT, so
> DatabaseMetaData.getTypeInfo() doesn't agree with
> DatabaseMetaData.getColumns() and resultSet.getTypeName() when the type
> used is NUMERIC(18,0). There are two types in getTypeInfo that map to
> only one in the table/resultSet metadata, so the metadata implementation
> is broken.
This is the same claim that started this thread, but I haven't seen the
evidence that this is against spec. Looking at the javadoc for the
relevant methods, I still don't see any evidence that my changes are
against spec. Please explain.
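To make the objection being debated concrete, here is an illustrative sketch. The rows are hard-coded to mirror the situation described in this thread, not fetched from a live connection: if getTypeInfo() advertises two type names (NUMERIC and BIGINT) for the same underlying NUMERIC(18,0) storage, then a tool reading getColumns() has no unique answer for which getTypeInfo row such a column belongs to.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of Blas's objection: two getTypeInfo rows
// sharing one storage type make the reverse lookup ambiguous. The data
// below is invented for the example, not read from a real driver.
public class TypeInfoAmbiguity {
    public static void main(String[] args) {
        // storage type -> JDBC type names advertised for it in getTypeInfo()
        Map<String, List<String>> byStorage = new HashMap<>();
        byStorage.computeIfAbsent("NUMERIC(18,0)", k -> new ArrayList<>()).add("NUMERIC");
        byStorage.computeIfAbsent("NUMERIC(18,0)", k -> new ArrayList<>()).add("BIGINT");

        // A column declared NUMERIC(18,0): which advertised type is it?
        List<String> candidates = byStorage.get("NUMERIC(18,0)");
        System.out.println(candidates);            // [NUMERIC, BIGINT]
        System.out.println(candidates.size() > 1); // true: no unique answer
    }
}
```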
> Because the discussion about standards could take a long time and be
> useless, let's take another direction to try to solve this question ASAP.
> The fact is I spent a lot of time implementing the standard things in the
> metadata information (not everything they ask for) that those guys need;
> your change breaks the implementation, and I think you don't need it.
> Also, if you need this mapping for your own purposes you can maintain it
> outside of CVS.
I wouldn't have put the change in if I didn't need it, and it is for
compatibility with another project (tjdo), not a proprietary project, so I
can't maintain it privately.
> I'm working now on making the driver version/dialect aware; when I finish
> that I will add the BIGINT type, and then you will get BIGINT for your
> trials.
How do you plan to do this? Won't it require a change to the engine?
> Actually, the driver is buggy for dialect 1 or 2 because it does not take
> the db version or dialect into account when needed. There are several
> places where the dialect must be taken into account, mainly in the
> metadata. For instance, the typeInfo must take the db version into
> account: for dialects 1 and 2 the max precision for NUMERIC and DECIMAL
> is 10, whereas for dialect 3 it is 18.
I'm less than convinced that the driver needs to be made more complicated
to support dialect 1. Please explain why we need to support dialect 1
databases at all.
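For what it's worth, the dialect-dependent limits Blas cites could be captured in a tiny lookup. The figures (10 digits for dialects 1 and 2, 18 for dialect 3) are taken from his message above, not independently verified, and the helper is hypothetical, not part of the actual driver:

```java
// Max NUMERIC/DECIMAL precision per Firebird dialect, using the figures
// quoted in this thread. Hypothetical sketch for illustration only.
public class DialectPrecision {
    static int maxNumericPrecision(int dialect) {
        switch (dialect) {
            case 1:
            case 2:
                return 10; // figure quoted in the thread for dialects 1 and 2
            case 3:
                return 18; // figure quoted for dialect 3
            default:
                throw new IllegalArgumentException("unknown dialect: " + dialect);
        }
    }

    public static void main(String[] args) {
        System.out.println(maxNumericPrecision(1)); // 10
        System.out.println(maxNumericPrecision(3)); // 18
    }
}
```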
> Please, can you agree with that and remove the mapping from CVS?
I will look and see if I can implement an alternate solution in tjdo; if
so, I will remove the BIGINT mapping.
> Blas Rodriguez Somoza