Why isn't 1/0 == Inf in every language? Is that not the most mathematically correct response?
All the languages I'm familiar with can represent both infinite and NaN floating-point values, so why would they choose to throw an error or return NaN instead? Is it just to make life harder for scientific application developers? ;-)
Update: We should maybe close this question, because I incorrectly thought that 1f/0f == Float.NaN in Java. But I was wrong: it correctly returns Float.POSITIVE_INFINITY. That was my main confusion; the fact that some languages throw errors instead is understandable, as long as no language returns NaN.
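A quick way to verify the corrected behavior in Java (the class name and print statements are just for illustration):

    public class FloatDivisionCheck {
        public static void main(String[] args) {
            float f = 1f / 0f;                                 // floating-point division by zero
            System.out.println(f);                             // Infinity
            System.out.println(f == Float.POSITIVE_INFINITY);  // true
            System.out.println(Float.isNaN(f));                // false: it is not NaN
        }
    }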
Apart from the fact that 1 / 0 == inf is mathematically highly questionable, the simple reason why it doesn't work in most programming languages is that 1 / 0 is, almost universally, an integer division (exceptions exist).
The result is an integer, and there is simply no way of encoding "infinity" in an integer. There is for floating-point numbers, which is why a floating-point division will actually yield an infinite value in most languages.
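To make the distinction concrete, here is a minimal Java sketch (Java chosen only because the question mentions it; the class name is arbitrary):

    public class DivisionByZero {
        public static void main(String[] args) {
            int zero = 0;
            try {
                int i = 1 / zero;            // integer division: no int value can mean "infinity"
                System.out.println(i);
            } catch (ArithmeticException e) {
                System.out.println(e);       // java.lang.ArithmeticException: / by zero
            }

            double d = 1.0 / 0.0;            // IEEE 754 floating-point division
            System.out.println(d);                              // Infinity
            System.out.println(d == Double.POSITIVE_INFINITY);  // true
        }
    }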
The same is true for NaN: the IEEE floating-point standard defines a bit pattern that represents a NaN value, but integers have no such bit pattern, so a NaN simply cannot be represented as an integer.
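And the NaN half of the story, again sketched in Java:

    public class NanCheck {
        public static void main(String[] args) {
            double nan = 0.0 / 0.0;                 // 0/0 in floating point is defined to be NaN
            System.out.println(nan);                // NaN
            System.out.println(Double.isNaN(nan));  // true
            System.out.println(nan == nan);         // false: NaN compares unequal even to itself
            // There is no integer counterpart: every bit pattern of an int is an ordinary number.
        }
    }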