Subject Re: [firebird-support] Re: NUMERIC or DECIMAL?
Author Helen Borrie
At 08:50 PM 22/10/2004 +0000, you wrote:

> > Quote:
> > The theoretical difference between NUMERIC and DECIMAL is that
> > NUMERIC will store exactly p digits, whereas DECIMAL will store
> > not fewer than p digits.

Theoretical, indeed. In practical terms, NUMERIC and DECIMAL are almost
identical; the only differences - and they are very subtle - appear when
the precision is less than 5.

>So my question is...where would you use NUMERIC rather than DECIMAL?

Except at very low precision, no difference.

>If you used DECIMAL(3,2) and tried to store 1234567.89, would it not
>allow it?

Now, does one answer that question with Yes or No? :-)

The precision specifies how many significant digits the number has,
regardless of the scale (the number of digits following the decimal
point). So you would expect this statement to cause an arithmetic
overflow. However, with DECIMAL, it doesn't. If you defined it as
NUMERIC, you would get an overflow with this number.

However, it's not that simple. NUMERIC(3,2) will in fact accept 123.45 -
which, by the rules, is not a valid NUMERIC(3,2) either. Unlike
DECIMAL(3,2), though, it won't accept 1234.56. (Fb/IB do not conform to
the standard re fixed numerics!!)
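
The behaviour above can be sketched in Python. This is a sketch only; it
assumes, as those examples imply, that the engine backs NUMERIC(3,2) with a
16-bit SMALLINT and DECIMAL(3,2) with a 32-bit INTEGER, storing the value
as (value * 10^scale):

```python
# Sketch of why NUMERIC(3,2) and DECIMAL(3,2) behave differently.
# Assumption (implied by the examples above): NUMERIC(3,2) is backed
# by a 16-bit SMALLINT, DECIMAL(3,2) by a 32-bit INTEGER, and the
# stored value is value * 10^scale.

def fits(value, scale, bits):
    """Return True if value fits a signed scaled integer of this width."""
    scaled = round(value * 10 ** scale)
    limit = 2 ** (bits - 1)
    return -limit <= scaled < limit

# NUMERIC(3,2): smallint backing, range -327.68 .. 327.67
print(fits(123.45, 2, 16))      # True  - accepted despite 5 digits
print(fits(1234.56, 2, 16))     # False - overflow
print(fits(1234567.89, 2, 16))  # False - overflow

# DECIMAL(3,2): integer backing, range -21474836.48 .. 21474836.47
print(fits(1234.56, 2, 32))     # True
print(fits(1234567.89, 2, 32))  # True  - no overflow
```

So neither type enforces "exactly 3 digits"; what matters is the width of
the integer behind it.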

You might be interested to read Geoff Worboys' Tech Info sheet on the
subject of numeric types, at

I think Myles was confusing the SQL NUMERIC type with the Paradox Numeric,
which is a floating-point type. (Paradox doesn't have any fixed-precision
types: you have to roll your own by dividing integers).

Re whether "precision should be scale and vice versa" - precision
determines how accurately a number can be represented - if you like, how
many of the significant digits in a number can be relied on. With fixed
precision numbers, a.k.a. "exact precision", every digit can be counted on
to be exact. With floating point numbers, precision "fades" from left to
right.

Hence, with floats, the more capacity the stored number has, the "higher"
its precision. The closer the decimal point is to the left, the closer the
stored number is to "exact". A floating point number is never "exact",
since it's unknown what the scale will be - hence the "floating" decimal point.

A FLOAT type (stored in 32 bits) has reliable precision up to about six
or seven sig. digits. So 0.1234567 is more "exact" than 123456.7,
but they are likely to behave like 0.123457 and 123457.0, respectively. It
might be no less "accurate" to simply truncate the leftmost digits.
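
You can watch that fade by round-tripping values through 32-bit IEEE
storage - a Python sketch, using the standard struct module as a stand-in
for a FLOAT column:

```python
# Sketch of FLOAT's ~7 reliable significant digits: round-trip a value
# through 32-bit IEEE storage, as a FLOAT column would.
import struct

def as_float32(x):
    """Round-trip x through a 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

print(as_float32(0.1234567))    # about 7 digits survive
print(as_float32(123456.7))     # same relative precision
print(as_float32(123456789.0))  # the trailing digits are already lost
```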

By contrast, a NUMERIC(7,7) stores the first number exactly and a
NUMERIC(7,1) stores the second number exactly. The second argument - the
scale (s) - says "this 32-bit integer has to be divided by 10^s when used
in a calculation".
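
A minimal Python sketch of that scaled-integer idea, using Decimal as a
stand-in for the engine's fixed-point arithmetic:

```python
# Fixed-point storage: a NUMERIC(p,s) value is kept as an integer and
# divided by 10^s when used. The round trip is exact.
from decimal import Decimal

def numeric_store(value, s):
    """Store: keep the scaled integer (what goes into the column)."""
    return round(Decimal(str(value)) * 10 ** s)

def numeric_load(scaled, s):
    """Load: divide by 10^s to recover the exact fixed-point value."""
    return Decimal(scaled) / 10 ** s

print(numeric_load(numeric_store('0.1234567', 7), 7))  # 0.1234567 exactly
print(numeric_load(numeric_store('123456.7', 1), 1))   # 123456.7 exactly
```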

The DOUBLE PRECISION type is a f.p. type stored in 64 bits. It can carry
up to 18 or 19 sig. digits - the same count as a 64-bit integer - but only
the first 15 or so can be considered reliable.
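
A quick Python check of where that reliability runs out (Python floats are
64-bit doubles):

```python
# A 64-bit double has a 53-bit mantissa, so roughly 15-16 decimal
# digits are reliable; an 18-19 digit integer may not survive.
print(10**15 + 1 == float(10**15 + 1))  # a 16-digit integer still survives
print(float(10**17 + 1) == 10**17)      # at 18 digits the +1 is lost
```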

Ann's rule of thumb is useful: use floating point types for things you
measure; use fixed precision types for things you count (like money!!).

Before you mathematicians decide to start a flame war about floating point
arithmetic: yep, I'm well aware that this does not represent even close to
the whole story about floating-point precision and I'm not going to argue
about the anomalies that occur at the extremities. But I hope it's close
enough to suggest that, when you're dealing with very big and very small
numbers, consult the accountants or the scientists or the statisticians
about the numbers they want you to store and the games they want you to
play with precision and rounding on floating point types.