I know that signed zeros are used to distinguish underflow of positive numbers from underflow of negative numbers, so the distinction is worth preserving. Intuitively, I feel that the absolute value of -0.0 should be 0.0. However, this is not what Haskell says:
Prelude> abs (-0.0)
-0.0
For what it's worth, Python 2.7 disagrees:
>>> -0.0
-0.0
>>> abs(-0.0)
0.0
Is this a bug, or part of the standard?
The behaviour you describe is definitely inconsistent with the IEEE 754 standard, which in its most recent incarnation says:
abs(x) copies a floating-point operand x to a destination in the same format, setting the sign bit to 0 (positive).
That's in section 5.5.1 of IEEE 754-2008, entitled 'Sign bit operations'. Though I can't give a link to the standard itself, you can see roughly the same language in the last available public draft of the standard, in section 7.5.1. (In general the standard differs quite significantly from that draft, but this bit's almost unchanged.)
That doesn't make it a bug in Haskell unless Haskell specifically claims to follow the IEEE 754 standard, and moreover claims that the implementation of abs in the Prelude should map to the IEEE 754 abs function. The standard merely requires that the abs operation be provided; it says nothing about how it might be spelled.
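If you do want the IEEE 754 behaviour in Haskell, a minimal sketch can be built on the Prelude's isNegativeZero predicate (part of the RealFloat class); the name ieeeAbs is hypothetical, and NaN sign handling is ignored here:
-- Hypothetical ieeeAbs: clears the sign bit of negative zero,
-- as IEEE 754 section 5.5.1 requires; built only on the Prelude.
ieeeAbs :: Double -> Double
ieeeAbs x
  | isNegativeZero x = 0.0    -- -0.0 becomes +0.0
  | otherwise        = abs x  -- all other values are unchanged
With this definition, ieeeAbs (-0.0) evaluates to 0.0.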
This is the behavior defined in the Haskell report.
6.4.4 Magnitude and Sign
A number has a magnitude and a sign. The functions abs and signum apply to any number and satisfy the law:
abs x * signum x == x
For real numbers, these functions are defined by:
abs x    | x >= 0 = x
         | x < 0  = -x
signum x | x > 0  = 1
         | x == 0 = 0
         | x < 0  = -1
Since negative zero is equal to zero, -0.0 >= 0 is True, so the first guard fires and abs (-0.0) = -0.0. This is also consistent with the definition of signum: because -0.0 == 0 holds, signum (-0.0) is 0, and the law abs x * signum x == x is still satisfied, since -0.0 * 0.0 = -0.0.
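To check this concretely, here is a small sketch that transcribes the report's definitions under the hypothetical names absR and signumR, so they don't shadow the Prelude versions:
-- The report's definitions for real numbers, renamed so they
-- don't clash with the Prelude's abs and signum.
absR :: Double -> Double
absR x | x >= 0 = x
       | x < 0  = -x

signumR :: Double -> Double
signumR x | x > 0  = 1
          | x == 0 = 0
          | x < 0  = -1

main :: IO ()
main = do
  print ((-0.0) >= (0 :: Double))       -- True: -0.0 == 0.0
  print (isNegativeZero (absR (-0.0)))  -- True: the sign bit survives
  print (absR (-0.0) * signumR (-0.0))  -- -0.0: the law holds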