 

How does OCaml manage float addition?

In Java, if you do 0.2 + 0.01, you will get 0.21000000000000002

This is due to IEEE 754.

However, in OCaml, if you do 0.2 +. 0.01, then you get the correct result 0.21.

I think OCaml also obeys IEEE 754 for floats, why OCaml can give correct result while Java cannot?

Jackson Tale asked Mar 27 '26 05:03

2 Answers

Which one is "correct" in this case? From the point of view of floating-point arithmetic, Java is correct here. Anyway,

Values in the OCaml toplevel are printed by genprintval.ml, where float values are printed by print_float, which uses string_of_float. Its definition is in pervasives.ml:

let string_of_float f = valid_float_lexem (format_float "%.12g" f)

As you can see, floats are printed using the printf format "%.12g": twelve significant digits, with anything beyond the twelfth digit simply discarded. That's why you see the "incorrect" answer 0.21. If you increase the precision, you get the same output as Java:

# Printf.sprintf "%.20g" (0.2 +. 0.01);;
- : string = "0.21000000000000002"
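The same default rendering can be reproduced in a compiled program; a minimal sketch, assuming a stock OCaml where string_of_float uses the "%.12g" default described above:

```ocaml
(* The toplevel's default rendering: 12 significant digits hide the tail. *)
let () =
  let x = 0.2 +. 0.01 in
  print_endline (string_of_float x);        (* prints "0.21" *)
  print_endline (Printf.sprintf "%.17g" x)  (* prints "0.21000000000000002" *)
```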
camlspotter answered Mar 29 '26 19:03

OCaml, for the type that it calls float, uses the double type of the underlying C/Unix platform, which is usually defined by that platform as IEEE 754's binary64 format.
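A couple of quick sanity checks of that claim, assuming the Stdlib constants reflect the underlying representation (epsilon_float is 2^-52 exactly when float is binary64):

```ocaml
(* Sanity checks that OCaml's float behaves like IEEE 754 binary64. *)
let () =
  assert (epsilon_float = 2. ** (-52.));   (* 53-bit significand *)
  assert (1.0 +. epsilon_float > 1.0);     (* epsilon is one ulp at 1.0 *)
  assert (max_float < infinity)            (* finite range, distinct infinity *)
```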

In OCaml, the conversion to decimal is done in the old-fashioned way, with a fixed number of digits (camlspotter has already dug up the format, which is %.12g, with the same meaning in OCaml that this format has in C).

Among modern languages (Java, JavaScript, Ruby), the fashion is to convert to decimal by emitting exactly as many digits as are required for the decimal representation to convert back to the original floating-point number. So in Java, 0.21 is printed for, and only for, the double nearest to 0.21, which is not the rational 21/100, as that number is not exactly representable as a binary floating-point number.
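The round-trip property can be checked from OCaml itself; a sketch assuming binary64 floats, for which 17 significant decimal digits are always enough to round-trip (while OCaml's default 12 are not, for this value):

```ocaml
(* 12 digits lose information; 17 digits always round-trip a binary64. *)
let () =
  let x = 0.2 +. 0.01 in
  let short = Printf.sprintf "%.12g" x in
  let long  = Printf.sprintf "%.17g" x in
  assert (float_of_string short <> x);  (* "0.21" reads back as a different double *)
  assert (float_of_string long = x)     (* 17 digits recover x exactly *)
```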

Neither method is better than the other; both have surprising side effects for the unwary developer. In particular, the Java conversion method has led to many “Why does the value of my float change when I convert it to double?” questions on StackOverflow (answer: it doesn't, but (double)0.1f is printed with many additional digits after 0.100000 because the type double contains more values than float).
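That float-to-double surprise can be reproduced in OCaml via Int32.float_of_bits, which interprets a 32-bit integer as an IEEE 754 single and widens it to a double. The constant 0x3DCCCCCD below is, as an assumption worth verifying, the binary32 value nearest to 0.1:

```ocaml
(* Widen the binary32 nearest to 0.1 (bits 0x3DCCCCCD, assumed) to a double
   and print it with enough digits to expose the extra values. *)
let () =
  let f = Int32.float_of_bits 0x3DCCCCCDl in
  assert (f <> 0.1);                       (* not the double nearest 0.1 *)
  print_endline (Printf.sprintf "%.17g" f)
```

The printed value begins 0.100000001490..., which is exactly the kind of tail that alarms developers who widen a float to a double in Java.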

Anyway, both OCaml and Java compute the same floating-point number for 0.2 + 0.01, because they both closely follow IEEE 754. They just print it differently. OCaml prints a fixed number of digits that is not enough to show that the number is neither 21/100 nor the double-precision value closest to 21/100. Java prints enough digits to show that the number is not the double closest to 21/100.
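A one-line check of that last claim, assuming the usual binary64 float (the literal 0.21 parses to the double nearest 21/100, while the sum lands strictly above it):

```ocaml
(* The computed sum is a different double from the one nearest to 21/100. *)
let () =
  assert (0.2 +. 0.01 <> 0.21);        (* distinct doubles *)
  assert (0.2 +. 0.01 -. 0.21 > 0.0)   (* the sum is strictly larger *)
```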

Pascal Cuoq answered Mar 29 '26 18:03


