Haskell sets (0/0) as qnan

I noticed that Haskell (ghci 7.10.2 from the Haskell Platform on Windows) flips the sign bit on the QNaN produced by (0/0 :: Double) relative to what I've seen in C++ (tested with MSVC++ 2013 and Cygwin gcc 4.9.2). Haskell produces the bit pattern 0xfff8000000000000 for (0/0), and -(0/0) produces 0x7ff8000000000000. This is backwards from what C++ implementations seem to do.

Here's a test program to illustrate:

import Data.Word
import Unsafe.Coerce
import Text.Printf

dblToBits :: Double -> Word64
dblToBits = unsafeCoerce

test :: Double -> IO ()
test d = putStrLn $ printf "%12f       0x%x" d (dblToBits d)

go :: IO ()
go = do
  test (0/0)
  test (-(0/0))
  test (1/0)
  test (-(1/0))

This gives the output:

      NaN       0xfff8000000000000  <- I expect 0x7F...?
      NaN       0x7ff8000000000000  <- I expect 0xFF...?
 Infinity       0x7ff0000000000000
-Infinity       0xfff0000000000000

Note that the infinities work out okay, but the NaNs seem flipped.

  • Is this part of the undefined semantics of NaN in Haskell? That is, does (0/0) mean GHC can use whatever NaN bit pattern it wants? If so, is there a precise way in Haskell to specify a QNaN or SNaN without resorting to special IEEE libraries [4]? I am writing an assembler for a piece of hardware that might be picky about its flavor of NaN.

  • Am I getting burned by unsafeCoerce? I have no easier way in Haskell to convert a Double to its bits and back.
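(As a side note on the second bullet: newer versions of GHC expose sanctioned bit-cast functions, castDoubleToWord64 and castWord64ToDouble in GHC.Float, available since base-4.10 / GHC 8.2. A minimal sketch of using them instead of unsafeCoerce, including constructing a Double from an exact bit pattern:)

```haskell
import Data.Word (Word64)
import GHC.Float (castDoubleToWord64, castWord64ToDouble)
import Text.Printf (printf)

-- Bit-cast a Double to its IEEE 754 representation without
-- unsafeCoerce (requires base >= 4.10, i.e. GHC >= 8.2).
dblToBits :: Double -> Word64
dblToBits = castDoubleToWord64

-- Build a Double from an exact bit pattern, e.g. the quiet NaN
-- with the sign bit clear that MSVC and gcc produce.
quietNaN :: Double
quietNaN = castWord64ToDouble 0x7ff8000000000000

main :: IO ()
main = do
  printf "0x%x\n" (dblToBits quietNaN)  -- the bit pattern round-trips
  print (isNaN quietNaN)
```

Because these are pure bit casts (no floating-point operations are performed), the exact NaN payload survives the round trip, which is what an assembler for NaN-picky hardware would need.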

REFERENCES:

  1. MSVC++ 2013: std::numeric_limits<double>::quiet_NaN() from <limits> gives 0x7ff8000000000000. Also tested on Cygwin gcc 4.9.2.
  2. std::numeric_limits::quiet_NaN: the standard states that the meaning of the sign bit is implementation-defined. Does Haskell have a similar rule?
  3. Perl's semantics are consistent with MSVC++.
  4. A possible Haskell library for IEEE floating point.
  5. A slightly related question that uses the same unsafeCoerce workaround I fell back on.
Asked Dec 10 '15 by Tim

1 Answer

You're asking too much from your NaNs. According to the IEEE standard, the sign bit on a NaN can be anything. So the compiler, processor, or floating-point libraries are free to make any choices they want to, and you will get different results on different compilers, processors, and libraries.

In particular, with a program like this, constant folding may mean that the operations are carried out by the compiler instead of in the target environment, depending on how the compiler is run. The compiler may use native floating-point instructions or it may use something like GMP or MPFR instead. This isn't uncommon. Since the IEEE standard says nothing about sign bits, you're going to end up with different values for different implementations. I would not be entirely surprised if you could demonstrate that the values changed when you turned optimizations on or off, and that's not including things like -ffast-math.

As an example of such an optimization: one compiler's constant propagation sees that you are computing a NaN and decides not to bother flipping the sign bit afterwards. Another compiler doesn't do that kind of analysis, so it emits an instruction to flip the sign bit, and the folks who made your processor didn't make that operation behave differently for NaNs.

In short, don't try to make sense of the sign bit on a NaN.
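(To see the sign-flip behavior concretely, here is a small check; it assumes GHC >= 8.2 for GHC.Float's castDoubleToWord64, and it assumes negate compiles to a plain IEEE sign-bit flip, which is typical but, per the above, not something the standard guarantees:)

```haskell
import Data.Bits (xor)
import GHC.Float (castDoubleToWord64)

main :: IO ()
main = do
  let nan   = 0/0 :: Double
      bits  = castDoubleToWord64 nan
      nbits = castDoubleToWord64 (negate nan)
  -- Whatever pattern this platform picks for 0/0, negation
  -- should differ from it only in the sign bit (bit 63).
  print (bits `xor` nbits == 0x8000000000000000)
```

So whether you see 0x7ff8... or 0xfff8... for (0/0) tells you only which sign bit this particular compiler/processor/library combination happened to produce, not which one is "correct."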

What exactly are you trying to accomplish here?

Answered Sep 25 '22 by Dietrich Epp