The IEEE 754 standard defines the square root of negative zero as negative zero. This choice is easy enough to rationalize, but other choices, such as defining `sqrt(-0.0)` as NaN, can be rationalized too and are easier to implement in hardware. If the fear was that programmers would write `if (x >= 0.0) then sqrt(x) else 0.0` and be bitten by this expression evaluating to NaN when `x` is `-0.0`, then `sqrt(-0.0)` could have been defined as `+0.0` (in fact, for this particular expression, the results would be even more consistent).
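For what it's worth, the guard really does let negative zero through, because `-0.0 >= 0.0` compares true. A minimal C sketch of that idiom (variable names are mine, not from any standard):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = -0.0;
    /* -0.0 compares equal to +0.0, so the guard passes and sqrt runs. */
    double r = (x >= 0.0) ? sqrt(x) : 0.0;
    /* Under IEEE 754 this prints r = -0 with signbit 1; had sqrt(-0.0)
       been defined as NaN, this "safe" guard would not have helped. */
    printf("r = %g, signbit = %d\n", r, signbit(r) != 0);
    return 0;
}
```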
Is there a numerical algorithm in particular where having `sqrt(-0.0)` defined as `-0.0` simplifies the logic of the algorithm itself?
It was defined in the official floating-point standard in 1985 (IEEE Std 754-1985) that `sqrt(-0.0) = -0.0`.
The 2008 revision of the same standard added a definition of the `pow` function. According to this definition, `pow(x, y)` can have a negative sign only if `y` is an odd integer. Hence, `pow(-0.0, 3.0) = -0.0`, while `pow(-0.0, 0.5) = +0.0`. In 2008 it was too late to change the definition of `sqrt(-0.0)`, and therefore we have the unfortunate situation that the two functions give different results.
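To make the discrepancy concrete, here is a small C check; on an IEEE 754 system with a conforming math library, `sqrt` and `pow` disagree on the sign of zero for a negative-zero base:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* sqrt keeps the sign of zero; pow does so only for odd integer exponents. */
    printf("sqrt(-0.0)     = %g\n", sqrt(-0.0));      /* -0 */
    printf("pow(-0.0, 3.0) = %g\n", pow(-0.0, 3.0));  /* -0 */
    printf("pow(-0.0, 0.5) = %g\n", pow(-0.0, 0.5));  /* +0, unlike sqrt */
    return 0;
}
```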
The sign of zero generally doesn't matter, since zero and negative zero compare equal. But it matters when you divide by it: `1/sqrt(-0.0)` gives `-INF`, while `pow(-0.0, -0.5)` gives `+INF`.
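The difference becomes observable as soon as the zero ends up in a denominator, as this quick C sketch shows (assuming a conforming implementation):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Dividing by a signed zero propagates the sign into the infinity. */
    printf("1.0 / sqrt(-0.0) = %g\n", 1.0 / sqrt(-0.0)); /* -inf */
    printf("pow(-0.0, -0.5)  = %g\n", pow(-0.0, -0.5));  /* +inf */
    return 0;
}
```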
The decision of 1985 was probably just an observation of the status quo. The Intel 8087 math coprocessor from 1980 had `sqrt` implemented in hardware, and it gave `sqrt(-0.0) = -0.0`. Today, all PC processors implement `sqrt` in hardware, so it would be very difficult to change the standard. The problem is not important enough to be worth making two different `sqrt` functions that differ only for negative zero. I don't know anything about the history prior to 1980. If anybody can trace the history further back, please post a comment here.