 

Why does GDB evaluate floating-point arithmetic differently from C++?

I've encountered something a little confusing while trying to deal with a floating-point arithmetic problem.

First, the code. I've distilled the essence of my problem into this example:

#include <iostream>
#include <iomanip>

using namespace std;
typedef union {long long ll; double d;} bindouble;

int main(int argc, char** argv) {
    bindouble y, z, tau, xinum, xiden;
    y.d = 1.0; // the nonstandard "1.0d" suffix removed; plain 1.0 is already a double
    z.ll = 0x3fc5f8e2f0686eee; // double 0.17165791262311053
    tau.ll = 0x3fab51c5e0bf9ef7; // double 0.053358253178712838
    // xinum = double 0.16249854626123722 (0x3fc4ccc09aeb769a)
    xinum.d = y.d * (z.d - tau.d) - tau.d * (z.d - 1);
    // xiden = double 0.16249854626123725 (0x3fc4ccc09aeb769b)
    xiden.d = z.d * (1 - tau.d);
    cout << hex << xinum.ll << endl << xiden.ll << endl;
}

xinum and xiden should have the same value (when y == 1), but because of floating-point roundoff error they don't. That part I get.
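
(For reference, with y = 1 the algebra does work out: xinum = (z - tau) - tau*(z - 1) = z - tau - tau*z + tau = z*(1 - tau), which is exactly the expression for xiden, so any difference between the two can only come from rounding.)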

The question came up when I ran this code (actually, my real program) through GDB to track down the discrepancy. If I use GDB to reproduce the evaluations done in the code, it gives a different result for xiden:

$ gdb mathtest
GNU gdb (Gentoo 7.5 p1) 7.5
...
This GDB was configured as "x86_64-pc-linux-gnu".
...
(gdb) break 16
Breakpoint 1 at 0x4008ef: file mathtest.cpp, line 16.
(gdb) run
Starting program: /home/diazona/tmp/mathtest 
...
Breakpoint 1, main (argc=1, argv=0x7fffffffd5f8) at mathtest.cpp:16
16          cout << hex << xinum.ll << endl << xiden.ll << endl;
(gdb) print xiden.d
$1 = 0.16249854626123725
(gdb) print z.d * (1 - tau.d)
$2 = 0.16249854626123722

You'll notice that if I ask GDB to calculate z.d * (1 - tau.d), it gives 0.16249854626123722 (0x3fc4ccc09aeb769a), whereas the actual C++ code that calculates the same thing in the program gives 0.16249854626123725 (0x3fc4ccc09aeb769b). So GDB must be using a different evaluation model for floating-point arithmetic. Can anyone shed some more light on this? How is GDB's evaluation different from my processor's evaluation?
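
(Note that 0x3fc4ccc09aeb769a and 0x3fc4ccc09aeb769b are adjacent bit patterns, so the two evaluation paths disagree by exactly one ULP in the final result.)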

I did look at this related question asking about GDB evaluating sqrt(3) to 0, but this shouldn't be the same thing because there are no function calls involved here.

asked Nov 12 '12 by David Z


1 Answer

This could be because the x87 FPU on x86 works to 80-bit extended precision while values sit in its registers, but rounds to 64-bit double precision whenever a value is stored to memory. GDB stores the result back to memory on every step of its (interpreted) computation, so each intermediate gets rounded to double, whereas the compiled code can keep intermediates in registers at the higher precision.
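
To see that mechanism in isolation, here is a minimal sketch (not the asker's program) that contrasts the two strategies: every intermediate rounded to double, versus the intermediate kept in extended precision and rounded only once at the end, modelled here with long double (the 80-bit x87 type with GCC/Clang on x86/x86_64). Whether the two paths actually disagree for given operands depends on the values and on how the program was compiled.

#include <cstdio>
#include <cstring>
#include <cstdint>

// Reinterpret a double's bit pattern as a 64-bit integer (well-defined,
// unlike reading the inactive member of a union in standard C++).
static std::uint64_t bits(double x) {
    std::uint64_t b;
    std::memcpy(&b, &x, sizeof b);
    return b;
}

int main() {
    // The operands from the question, reconstructed from their bit patterns.
    double z, tau;
    std::uint64_t zb = 0x3fc5f8e2f0686eeeULL; // 0.17165791262311053
    std::uint64_t tb = 0x3fab51c5e0bf9ef7ULL; // 0.053358253178712838
    std::memcpy(&z, &zb, sizeof z);
    std::memcpy(&tau, &tb, sizeof tau);

    // Path 1: both steps done in double precision, so each intermediate is
    // rounded to double (what plain SSE double math does, and what storing
    // each step back to a 64-bit variable forces).
    double step = 1.0 - tau;
    double narrow = z * step;

    // Path 2: the intermediate kept in extended precision and rounded to
    // double only once, at the very end -- the register-resident x87 case.
    long double wide = (long double)z * (1.0L - (long double)tau);
    double once = (double)wide;

    std::printf("rounded each step: %.17g (0x%016llx)\n",
                narrow, (unsigned long long)bits(narrow));
    std::printf("rounded once:      %.17g (0x%016llx)\n",
                once, (unsigned long long)bits(once));
}

Compiler flags matter here too: on x86_64 the default is SSE double math, but building with g++ -mfpmath=387 keeps intermediates on the x87 register stack, and FMA contraction (-ffp-contract=fast on a target with FMA) can fuse the multiply and subtract in the xinum expression; either can shift a result by an ULP.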

answered by Bryan