Simple question: what is the correct bit-representation of the number 1.15507e-173, in double precision? Full question: how does one determine the correct parsing of this number?
Background: my question follows from this answer, which shows two different bit-representations produced by three different parsers, namely
0x1c06dace8bda0ee0
and
0x1c06dace8bda0edf
and I'm wondering which parser has got it right.
Update: Section 6.4.4.2 of the C99 specification says that for the C parser,
"...the result is either the nearest representable value, or the larger
or smaller representable value immediately adjacent to the nearest
representable value, chosen in an implementation-defined manner."
This implies that the parsed number need not be the nearest representable value; it may instead be either of the two representable values immediately adjacent to the nearest one. The same spec says in section 7.20.1.3 that strtod() behaves essentially the same way as the built-in parser. Thanks to the answerers who pointed this out.
Also see this answer to a similar question, and this blog.
    In[1]:= num1 = ImportString["\.1c\.06\.da\.ce\.8b\.da\.0e\.e0", "Real64", ByteOrdering -> 1] // First;
    In[2]:= num2 = ImportString["\.1c\.06\.da\.ce\.8b\.da\.0e\.df", "Real64", ByteOrdering -> 1] // First;
    In[3]:= numOr = SetPrecision[1.15507, Infinity] * 10^-173;
    In[4]:= SetPrecision[num1, Infinity] - numOr // N
    Out[4]= -6.65645*10^-190
    In[5]:= SetPrecision[num2, Infinity] - numOr // N
    Out[5]= -2.46118*10^-189
Given that both deviate to the same side, the nearest representable value is the one with the smaller deviation; the correct representation is therefore the first one, 0x1c06dace8bda0ee0.