I'm working on a function optimization routine (a variant of the Nelder-Mead algorithm) which fails to converge in very specific conditions.
I've identified that a float variable, let's call it a, is being assigned the mean of a and another variable b that differs from it by only a single bit (one ULP). More precisely, the values of the two variables are as follows:
float a = 25.9735966f; // 41CFC9ED
float b = 25.9735947f; // 41CFC9EC
Now I try to assign to a the mean of a and b:
a = 0.5 * (a+b);
When I write this code in a test program, I get the result I expect, namely 25.9735947. But in the debugger of my original library code I see that the value of a remains 25.9735966. I'm fairly certain that I use the same compiler flags for both programs. Is there any reason why this single-precision calculation would yield different results?
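For reference, here is a minimal sketch of the kind of test program I mean. The float_bits helper is not part of my library code; it is only there to confirm the hex patterns shown in the comments above:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Illustrative helper: returns the raw IEEE 754 bit pattern of a float. */
static uint32_t float_bits(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u); /* well-defined type punning via memcpy */
    return u;
}

int main(void)
{
    float a = 25.9735966f;
    float b = 25.9735947f;

    printf("a = %.7f (0x%08X)\n", (double)a, float_bits(a)); /* 0x41CFC9ED */
    printf("b = %.7f (0x%08X)\n", (double)b, float_bits(b)); /* 0x41CFC9EC */

    a = 0.5 * (a + b);
    printf("mean = %.7f\n", (double)a); /* 25.9735947 under round-to-nearest */
    return 0;
}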
UPDATE
As @PascalCuoq requested, here is what I believe is the assembly for the line in question. The line does a few other things, though, and I'm not sure where the multiplication happens.
.loc 1 53 0 discriminator 2
movl -60(%rbp), %eax
cltq
salq $3, %rax
addq -88(%rbp), %rax
movq (%rax), %rax
movl -44(%rbp), %edx
movslq %edx, %rdx
salq $2, %rdx
leaq (%rax,%rdx), %rcx
movl -44(%rbp), %eax
cltq
salq $2, %rax
addq -72(%rbp), %rax
movl -60(%rbp), %edx
movslq %edx, %rdx
salq $3, %rdx
addq -88(%rbp), %rdx
movq (%rdx), %rdx
movl -44(%rbp), %esi
movslq %esi, %rsi
salq $2, %rsi
addq %rsi, %rdx
movss (%rdx), %xmm1   # load one of the two float operands
movl -52(%rbp), %edx
movslq %edx, %rdx
salq $3, %rdx
addq -88(%rbp), %rdx
movq (%rdx), %rdx
movl -44(%rbp), %esi
movslq %esi, %rsi
salq $2, %rsi
addq %rsi, %rdx
movss (%rdx), %xmm0   # load the other float operand
addss %xmm1, %xmm0    # single-precision add: the (a+b) part
movss .LC6(%rip), %xmm1   # load the constant at .LC6 (presumably 0.5)
mulss %xmm1, %xmm0    # single-precision multiply: this is where 0.5 * (...) happens
movss %xmm0, (%rax)   # store the float result to the first destination
movl (%rax), %eax     # reload the stored bits ...
movl %eax, (%rcx)     # ... and copy them to the second destination (chained assignment)
CLARIFICATION
My code is a lightly modified copy of the Nelder-Mead code from Numerical Recipes. The offending line is this one:
p[i][j]=psum[j]=0.5*(p[i][j]+p[ilo][j]);
In this line, p[i][j] == 25.9735966f and p[ilo][j] == 25.9735947f. The resulting value in p[i][j] is 25.9735966f.
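To make this self-contained, here is a stripped-down reproduction of the offending line; the array shapes and index values are made up purely for illustration:

#include <stdio.h>

int main(void)
{
    /* Minimal stand-in for the Numerical Recipes simplex arrays;
       only the two float values matter. */
    float row_i[1]   = { 25.9735966f }; /* holds p[i][j]   */
    float row_ilo[1] = { 25.9735947f }; /* holds p[ilo][j] */
    float *p[2] = { row_i, row_ilo };
    float psum[1];
    int i = 0, ilo = 1, j = 0;

    p[i][j] = psum[j] = 0.5 * (p[i][j] + p[ilo][j]);
    printf("p[i][j] = %.7f\n", (double)p[i][j]); /* 25.9735947 under round-to-nearest */
    return 0;
}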
I just re-read the relevant part of IEEE 754-1985, assuming that your floating-point implementation conforms to that standard. The only thing that comes to mind is that the rounding modes differ between your two environments: the two operands differ by exactly one ULP, so their sum falls exactly halfway between two representable floats, and the outcome of the addition is decided entirely by the rounding mode (the subsequent multiplication by 0.5 is exact). These are the possibilities:
round to nearest  => 25.9735947f
round toward +INF => 25.9735966f
round toward 0    => 25.9735947f
round toward -INF => 25.9735947f
So the only possibility is that your debugging environment has the rounding mode set to round toward +INF. To me, there is no other plausible explanation.
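One way to test this hypothesis is sketched below (this is not part of the original code): query the current rounding mode with fegetround and re-run the computation under FE_UPWARD. The inputs are declared volatile so the compiler cannot fold the arithmetic at compile time, and GCC may additionally need -frounding-math for the mode change to be honored:

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON /* we read and change the rounding mode */

int main(void)
{
    volatile float a = 25.9735966f; /* 41CFC9ED */
    volatile float b = 25.9735947f; /* 41CFC9EC */

    printf("started in FE_UPWARD mode: %d\n", fegetround() == FE_UPWARD);

    fesetround(FE_TONEAREST);
    float nearest = 0.5 * (a + b);

    fesetround(FE_UPWARD);
    float upward = 0.5 * (a + b);

    fesetround(FE_TONEAREST); /* restore the default */

    printf("to nearest : %.7f\n", (double)nearest); /* 25.9735947 */
    printf("toward +INF: %.7f\n", (double)upward);  /* 25.9735966 */
    return 0;
}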