Inconsistency in divide-by-zero behavior between different value types

Please consider the following code and comments:

Console.WriteLine(1 / 0); // will not compile, error: Division by constant zero

int i = 0;
Console.WriteLine(1 / i); // compiles, runs, throws: DivideByZeroException

double d = 0;
Console.WriteLine(1 / d); // compiles, runs, results in: Infinity   

I can understand the compiler actively checking for division by a constant zero, and the DivideByZeroException at runtime, but:

Why would using a double in a divide-by-zero return Infinity rather than throwing an exception? Is this by design or is it a bug?

Just for kicks, I did this in VB.NET as well, with "more consistent" results:

dim d as double = 0.0
Console.WriteLine(1 / d) ' compiles, runs, results in: Infinity

dim i as Integer = 0
Console.WriteLine(1 / i) '  compiles, runs, results in: Infinity

Console.WriteLine(1 / 0) ' compiles, runs, results in: Infinity

EDIT:

Based on kekekela's feedback, I ran the following, which resulted in Infinity:

Console.WriteLine(1 / .0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001);

This test seems to corroborate the idea that a literal double of 0.0 is actually a very, very tiny fraction, which will result in Infinity...
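
For reference, here's a hedged check of what the compiler does with a constant that small (the 1e-400 literal below is just a stand-in for the long literal above):

double underflow = 1e-400;            // too small for a double; the constant underflows
Console.WriteLine(underflow == 0.0);  // True: the literal compiles to exactly zero
Console.WriteLine(1 / underflow);     // Infinity: IEEE 754 defines 1 / +0.0 as +Infinity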

asked Jan 05 '11 by Paul Sasik

3 Answers

In a nutshell: the double type defines a value for infinity while the int type doesn't. So in the double case, the result of the calculation is a value that you can actually express in the given type since it's defined. In the int case, there is no value for infinity and thus no way to return an accurate result. Hence the exception.
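
A minimal C# sketch of that contrast (illustrative only):

double d = 0;
double result = 1 / d;                         // no exception: double can express the result
Console.WriteLine(double.IsInfinity(result));  // True

int i = 0;
try
{
    Console.WriteLine(1 / i);                  // int has no value for infinity...
}
catch (DivideByZeroException ex)
{
    Console.WriteLine(ex.Message);             // ...so the runtime throws instead
}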

VB.NET does things a little bit differently: with the / operator, division always produces a floating-point value, even when both operands are integers. This allows developers to write, e.g., 1 / 2 and have it evaluate to 0.5, which some would consider intuitive. If you want to see behavior consistent with C#, try this:

Console.WriteLine(1 \ 0)

Note the use of the integer division operator (\, not /) above. I believe you'll get a compile error for the constant expression above, just as in C# (with a variable divisor, it would be a DivideByZeroException at runtime).

Similarly, try this:

Dim x As Object = 1 / 0
Console.WriteLine(x.GetType())

The above code will output System.Double.

As for the point about imprecision, here's another way of looking at it. It isn't that the double type has no value for exactly zero (it does); rather, the double type is not meant to provide mathematically exact results in the first place. (Certain values can be represented exactly, yes. But calculations give no promise of accuracy.) After all, the value of the mathematical expression 1 / 0 is not defined (last I checked). But 1 / x approaches infinity as x approaches zero. So from this perspective if we cannot represent most fractions n / m exactly anyway, it makes sense to treat the x / 0 case as approximate and give the value it approaches--again, infinity is defined, at least.
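
A quick sketch of both points -- exact zero is representable, and 1 / x grows without bound as x shrinks:

double zero = 0.0;
Console.WriteLine(zero == 0);     // True: the double type does have a value for exactly zero

for (double x = 1; x > 0; x /= 1e100)
    Console.WriteLine(1 / x);     // prints 1, 1E+100, 1E+200, 1E+300 -- heading toward Infinity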

answered by Dan Tao


This is by design because the double type complies with IEEE 754, the standard for floating-point arithmetic. Check out the documentation for Double.NegativeInfinity and Double.PositiveInfinity.

The value of this constant is the result of dividing a positive [or negative] number by zero.
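
A short sketch of those special values in action (illustrative):

Console.WriteLine(1.0 / 0.0 == Double.PositiveInfinity);   // True
Console.WriteLine(-1.0 / 0.0 == Double.NegativeInfinity);  // True
Console.WriteLine(Double.IsNaN(0.0 / 0.0));                // True: zero over zero yields NaN, not Infinity

Note that, unlike the integer case, these constant floating-point divisions compile without error; the compiler folds them to the IEEE 754 values directly.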

answered by Wesley Wiser


A double is a floating point number and not an exact value, so what you are really dividing by from the compiler's viewpoint is something approaching zero, but not exactly zero.
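
A sketch of that intuition (using Double.Epsilon, the smallest positive double, which the answer itself doesn't mention but which makes the point concrete):

double smallest = Double.Epsilon;   // ~4.94e-324
Console.WriteLine(1 / smallest);    // Infinity: the true quotient (~2e+323) overflows Double.MaxValue (~1.8e+308)

So even the smallest representable nonzero divisor already produces Infinity, which is consistent with the experiment in the question's EDIT.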

answered by heisenberg