It seems inconsistent that in Scala, integer division by zero throws a java.lang.ArithmeticException: / by zero, but floating-point division (1.0/0.0) returns Double.PositiveInfinity.
I understand that from a type perspective it might be tricky to have both a Double.PositiveInfinity and an Integer.Infinity, but I assume that with Scala's powerful type system some other way could have been found to resolve that aspect of the problem.
Is there something else I'm missing that would better explain these two differing behaviors?
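For concreteness, the two behaviors can be reproduced side by side in a small Scala program (a minimal sketch; the object name is arbitrary):

```scala
object DivByZeroDemo extends App {
  // Floating-point division by zero is well-defined by IEEE 754:
  println(1.0 / 0.0)   // Infinity
  println(-1.0 / 0.0)  // -Infinity
  println(0.0 / 0.0)   // NaN

  // Integer division by zero has no representable result and throws.
  // (A runtime value is used so the expression isn't a compile-time constant.)
  val zero = 0
  try {
    println(1 / zero)
  } catch {
    case e: ArithmeticException => println(s"caught: ${e.getMessage}")
  }
}
```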
This is a hardware limitation, not a software one, and the reason is simple: IEEE 754 floating-point arithmetic explicitly supports +Inf and -Inf, i.e. there are bit representations that correspond to those two ideas, and therefore it makes sense for them to be the result of 1.0/0.0.
Integer arithmetic as implemented in common processors does not have an internal representation for infinities, and therefore has to throw an error.
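To illustrate the point about bit representations: +Infinity has a fixed, reserved IEEE 754 encoding (sign 0, exponent bits all ones, fraction 0), which can be inspected from Scala via the JDK:

```scala
// doubleToRawLongBits exposes the raw IEEE 754 bit pattern of a Double.
val bits = java.lang.Double.doubleToRawLongBits(Double.PositiveInfinity)
println(f"0x$bits%016X")  // 0x7FF0000000000000
// A 32-bit Int has no such reserved pattern: all 2^32 bit patterns
// already denote finite integers, so there is nothing left to mean "Inf".
```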
You could implement an integral type that supports infinities in software, but it would be slower, since a division would no longer correspond directly to the processor's div instruction. This is not sensible in my opinion: integer div-by-zero errors are almost always due to (a) an incorrect implementation of your algorithm or (b) a failure to validate user input correctly, both of which are fundamental problems that have to be tackled before you compile your code, not at runtime via exceptions! In FP arithmetic, on the other hand, you may easily run into non-representable results when you don't (and really cannot) expect them: 1e-308/1e+308 has a well-defined mathematical value, but that value is not representable as a double even though both operands are.
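That last example can be checked directly (a small sketch; the variable names are just for illustration):

```scala
val tiny = 1e-308
val huge = 1e+308
// The true quotient, 1e-616, is far below the smallest subnormal double
// (about 4.9e-324), so it silently underflows to 0.0; the reciprocal
// case, 1e+616, exceeds the largest double and overflows to Infinity.
println(tiny / huge)  // 0.0
println(huge / tiny)  // Infinity
```

In both cases IEEE 754 returns a well-defined substitute value instead of raising an error, which is exactly the behavior the answer describes.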