Subject: Re: [ib-support] Re: 3 * 1/3 = 0 ???
Author: Raymond Kennington
Post date: 2002-08-30T19:48:05Z

The C language also treats 1/3 as 0.


rogervellacott wrote:

> In primary and high school my maths teachers instilled in us that 3 is not the same as 3.0.
>
> The issue is not whether it is legitimate to have integer
> operations. It is whether literal values should be interpreted as
> integers by default.
>
> Division by literal values should default to the accurate result, not
> the inaccurate result. So 1/3 should default to 0.3333... If I have
> declared a variable x as an integer, then it is reasonable that x/3
> (where x = 1) should return 0. But nowhere was I asked to declare
> that 1 was an integer, or that 3 was an integer. So this behaviour

Historically, 3 was a symbol to represent a whole number of objects. Later, notation was introduced to represent fractional parts of numbers less than one. Today we use the dot or period.

In Foundations of Mathematics, the development from whole numbers to decimal representations of fractions has many steps. The notation is not important, but the representation of 3 is different to the representation of 3.0, although one can propose several ways to map integers to reals.

The point of this is that 3 is a numeral that represents an integer, and an extra symbol is required to make it otherwise.

Integers and floating-point numbers are stored in different ways in the computer and operated on with different operators for integers and floats; different registers and different numbers of registers are used, and even a separate floating-point unit is available in some architectures. In order to operate on integers as floats they must first be converted.

Now, given that, it is up to the compiler-designer to decide what is to be done when calculating 1/3, which has the natural computer interpretation of dividing an integer by an integer. The decision is not whether to treat the numerals as integers or floats, but whether to treat the division operator / as an integer operator or a floating-point operator.

Languages that treat 1/3 as 0 interpret the operator as an integer operator because it operates on integers. This makes / a flexible (or overloaded) operator.

The developer must explicitly inform the compiler to treat the operator as float if that is desired.

Amongst other things, this is quicker than converting to float, operating, and converting back. Historically, this could have slowed down the operation by a factor of 100 over doing an integer operation.

However, given that there is an SQL standard, one ought to use it, regardless of any other argument.

> makes SQL arithmetic quite esoteric, and accident prone.

Quite common, actually. IIRC, FORTRAN and COBOL, as well as C and C++, do the same.

Raymond Kennington