Divide by Zero: Infinite, NaN, or Zero Division Error? [closed]

Why isn't 1/0 == Inf in every language? Is that not the most mathematically correct response?

All the languages I'm familiar with are capable of expressing both Infinite and NaN values, so why would they choose to throw an error or return NaN instead? Is it just to make life harder for scientific application developers? ;-)

Update: We should maybe close this question, because I incorrectly thought that 1f/0f == Float.NaN in Java. I was wrong: it correctly returns Float.POSITIVE_INFINITY. That was my main confusion; the fact that some languages throw errors instead is understandable, as long as no language returns NaN.
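For reference, a quick check in Java (the outputs in the comments are what a current JVM prints, following IEEE 754 semantics):

```java
public class FloatDivCheck {
    public static void main(String[] args) {
        System.out.println(1f / 0f);                            // Infinity
        System.out.println(1f / 0f == Float.POSITIVE_INFINITY); // true
        System.out.println(0f / 0f);                            // NaN
        System.out.println(Float.isNaN(0f / 0f));               // true (NaN != NaN, so use isNaN)
    }
}
```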

asked Jul 03 '11 by Neil Traft

1 Answer

Apart from the fact that 1 / 0 == inf is mathematically highly questionable, the simple reason it doesn't work in most programming languages is that 1 / 0 is almost universally an integer division (exceptions exist).

The result is an integer, and there is simply no way of encoding “infinity” in an integer. There is for floating-point numbers, which is why a floating-point division by zero will actually yield an infinite value in most languages.
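To illustrate, a minimal sketch in Java (used here only because the question mentions it): the integer division throws, while the same division done on doubles yields infinity.

```java
public class DivideByZero {
    public static void main(String[] args) {
        // Integer division by zero: no int bit pattern means "infinity",
        // so Java throws instead of returning a value.
        try {
            System.out.println(1 / 0);
        } catch (ArithmeticException e) {
            System.out.println("int 1 / 0 -> " + e);        // java.lang.ArithmeticException: / by zero
        }

        // Floating-point division by zero: IEEE 754 reserves a bit pattern
        // for infinity, so the result is simply positive infinity.
        double d = 1.0 / 0.0;
        System.out.println("double 1.0 / 0.0 -> " + d);     // Infinity
        System.out.println(d == Double.POSITIVE_INFINITY);  // true
    }
}
```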

The same is true for NaN: the IEEE 754 floating-point standard defines bit patterns that represent a NaN value, but integers have no such reserved value, so NaN simply cannot be represented as an integer.
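A small follow-up sketch showing the reserved NaN bit pattern inside a double; a 32-bit int has no counterpart, since all of its 2^32 bit patterns already encode ordinary integers:

```java
public class NanBits {
    public static void main(String[] args) {
        double nan = 0.0 / 0.0;
        System.out.println(nan);                                             // NaN
        // Canonical NaN bit pattern: exponent all ones, non-zero mantissa.
        System.out.println(Long.toHexString(Double.doubleToLongBits(nan)));  // 7ff8000000000000
    }
}
```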

answered Sep 21 '22 by Konrad Rudolph