
Why is 10/3 exact in C? [duplicate]

Tags: c, intel, ieee-754

Take a look at this code. 10/3 returns 3.3333332538604736328125000, and when I multiply that by 3 on a calculator I get 9.99..., but when I do the same in the code I get exactly 10.00. How is this possible?

#include <stdlib.h>
#include <stdio.h>

int main() {

    float v = 10.f/3.f;
    float test = v*3.f;
    printf("10/3 => %25.25f \n (10/3)*3 => %25.25f\n",v,test);
    return 0;
}

This is the assembly code without the printf call, compiled with default gcc 7.2.1 options:

0000000000400497 <main>:
  400497:       55                      push   rbp
  400498:       48 89 e5                mov    rbp,rsp
  40049b:       f3 0f 10 05 b1 00 00    movss  xmm0,DWORD PTR [rip+0xb1]        # 400554 <_IO_stdin_used+0x4>
  4004a2:       00 
  4004a3:       f3 0f 11 45 fc          movss  DWORD PTR [rbp-0x4],xmm0
  4004a8:       f3 0f 10 4d fc          movss  xmm1,DWORD PTR [rbp-0x4]
  4004ad:       f3 0f 10 05 a3 00 00    movss  xmm0,DWORD PTR [rip+0xa3]        # 400558 <_IO_stdin_used+0x8>
  4004b4:       00 
  4004b5:       f3 0f 59 c1             mulss  xmm0,xmm1
  4004b9:       f3 0f 11 45 f8          movss  DWORD PTR [rbp-0x8],xmm0
  4004be:       b8 00 00 00 00          mov    eax,0x0
  4004c3:       5d                      pop    rbp
  4004c4:       c3                      ret    
  4004c5:       66 2e 0f 1f 84 00 00    nop    WORD PTR cs:[rax+rax*1+0x0]
  4004cc:       00 00 00 
  4004cf:       90                      nop

I think the result of mulss is being rounded by the CPU.

As a note, 10/3 in GNU bc returns 3.3333333333333333333333 (*3 => 9.9999...), and in SciLab it returns 3.3333333333333334813631 (*3 => 10).

Amanda Osvaldo asked Nov 30 '17 20:11

2 Answers

You end up getting exactly 10 as a result because the representation happens to work out that way. I get the same on my implementation for both float and double.

Let's look at an example using double:

If we print out 10./3. in hexadecimal floating point notation using %a, we get this:

0x1.aaaaaaaaaaaabp+1

This matches up with the IEEE 754 double representation 0x400aaaaaaaaaaaab.

With the exponent folded in, the above number is:

0x3.5555555555556

In binary:

11.0101010101010101010101010101010101010101010101011

To keep things simple, let's add the number three times instead of multiplying by 3:

     11.0101010101010101010101010101010101010101010101011
+    11.0101010101010101010101010101010101010101010101011
---------------------------------------------------------
    110.1010101010101010101010101010101010101010101010110
+    11.0101010101010101010101010101010101010101010101011
---------------------------------------------------------
   1010.0000000000000000000000000000000000000000000000001

So the exact sum is not exactly 10, but is off from 10 by the least significant bit.

I noticed a similar difference when using float.

10.f/3.f printed with %a:

0x1.aaaaaap+1

With the exponent folded in:

0x3.555554

In binary:

11.0101010101010101010101

Then we add:

     11.0101010101010101010101
+    11.0101010101010101010101
------------------------------
    110.1010101010101010101010
+    11.0101010101010101010101
------------------------------
   1001.1111111111111111111111

Again, off by the least significant bit.

As for how the actual result is rounded: mulss (and mulsd) compute the exact product and then round it to the nearest representable value. In both cases the exact product is within half an ulp of 10 (off by 2^-51 for double, where the ulp at 10 is 2^-49; off by 2^-22 for float, where the ulp is 2^-20), so round-to-nearest returns exactly 10.

dbush answered Oct 14 '22 23:10

The reason for the discrepancy between what you see in C and what you see in SciLab is that you are using single-precision floating-point values (float) in C, whereas SciLab seems to use double-precision values (double) by default.

You can see the difference by removing the f suffixes from your constants and using double instead of float.

user541686 answered Oct 14 '22 22:10