From The Open Group Base Specifications Issue 7, IEEE Std 1003.1-2008:
The signbit() macro shall return a non-zero value if and only if the sign of its argument value is negative.
Why does `signbit(-0)` return 0? I just want to understand the logic behind this decision.
In `signbit(-0)`:

- `0` is a constant of type `int`.
- `-0` is the result of negating `0`, so it is zero of type `int`.
- `signbit(-0)` produces 0.

If you do `signbit(-0.)` instead:

- `0.` is a constant of type `double`.
- `-0.` is the result of negating `0.`, so it is a negative zero of type `double`.
- `signbit(-0.)` produces 1.

The key is that `-0` negates an integer type, and the integer types typically do not encode negative zero as distinct from a positive zero. When an integer zero is converted to floating point, the result is a simple (positive) zero. However, `-0.` negates a floating-point type, and the floating-point types do encode negative zero distinctly from positive zero.
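A quick sketch to check both cases (assuming ordinary IEEE 754 doubles; note that `signbit` is only specified to return nonzero, not exactly 1, so the results are normalized with `!= 0`):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* -0 negates an int; the resulting integer zero converts to a
       positive floating-point zero, so the sign bit is clear. */
    printf("signbit(-0)  -> %d\n", signbit(-0) != 0);   /* prints 0 */

    /* -0. negates a double constant, yielding a negative zero,
       so the sign bit is set. */
    printf("signbit(-0.) -> %d\n", signbit(-0.) != 0);  /* prints 1 */
    return 0;
}
```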
In two's complement, which is by far the most common representation for signed integers these days, there is no such thing as negative zero. `-0 == +0` in all cases, even bitwise. So by the time the macro's code processes it, even if it includes `((float) -0)`, the sign is already gone.
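A small check of that claim on a two's complement machine (the `memcmp` comparison is just one way to confirm the bit patterns match):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    int pos = +0;
    int neg = -0;  /* integer negation: there is no distinct integer -0 */

    /* Equal as values and identical bit for bit, so the sign is lost
       before any conversion to floating point could preserve it. */
    printf("equal:     %d\n", pos == neg);                          /* 1 */
    printf("same bits: %d\n", memcmp(&pos, &neg, sizeof pos) == 0); /* 1 */
    return 0;
}
```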
If you want to test, you might have better luck with something like `signbit(-0.0)` or `signbit(-1.0 * 0)`. Since you're not converting from an integer at that point, the number should still have a sign.
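For example (assuming IEEE 754 arithmetic, where multiplying a positive zero by `-1.0` yields a negative zero):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Both expressions are evaluated in floating point, so the
       negative zero survives and the sign bit is set. */
    printf("%d\n", signbit(-0.0) != 0);      /* 1 */
    printf("%d\n", signbit(-1.0 * 0) != 0);  /* 1: the int 0 converts to 0.0,
                                                and -1.0 * 0.0 is -0.0 */
    return 0;
}
```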
It doesn't. The `signbit` macro returns the literal sign bit of a floating-point datum. Note the text: "if the sign of the argument" is negative, not "if the argument" is negative.
Footnote 236 in the C standard clarifies:
The signbit macro reports the sign of all values, including infinities, zeros, and NaNs.
Is this a hypothetical question, or do you have a buggy implementation?
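If you want to check an implementation against that footnote, here is a minimal sketch (assuming IEEE 754 and that `<math.h>` defines `NAN`; the sign of `-NAN` is not guaranteed by the C standard, though flipping the sign bit is what IEEE 754 negation does):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* signbit inspects the sign bit itself, even for values that
       do not compare less than zero. */
    printf("%d\n", signbit(-0.0) != 0);       /* 1, even though -0.0 == 0.0 */
    printf("%d\n", signbit(-INFINITY) != 0);  /* 1 */
    printf("%d\n", signbit(-NAN) != 0);       /* 1 on typical IEEE 754 systems */
    return 0;
}
```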