 

Curious Behavior When Doing Addition on Nullable Floats

Tags:

c#

I've noticed something very odd when working with addition of nullable floats. Take the following code:

float? a = 2.1f;
float? b = 3.8f;
float? c = 0.2f;
float? result =
    (a == null ? 0 : a)
    + (b == null ? 0 : b)
    + (c == null ? 0 : c);
float? result2 =
    (a == null ? 0 : a.Value)
    + (b == null ? 0 : b.Value)
    + (c == null ? 0 : c.Value);

result is 6.099999 whereas result2 is 6.1. I was lucky to stumble on this at all, because if I change the values of a, b, and c the behavior usually appears correct. This may also happen with other arithmetic operators or other nullable value types, but this is the case I've been able to reproduce. What I don't understand is why the implicit conversion from float? to float didn't work correctly in the first case. I could perhaps understand if it tried to produce an int value, given that the other side of the conditional is 0, but that doesn't appear to be what's happening. Since result only appears incorrect for certain combinations of floating-point values, I'm assuming this is some kind of rounding problem with multiple conversions (possibly due to boxing/unboxing or something).

Any ideas?

asked Aug 28 '14 by daveaglick

1 Answer

See comments by @EricLippert.

ANYTHING is permitted to change the result -- let me emphasize that again: ANYTHING WHATSOEVER, including the phase of the moon, is permitted to change whether floats are computed in 32 bit accuracy or higher accuracy. The processor is always allowed, for any reason whatsoever, to suddenly start doing floating point arithmetic in 80 bits or 128 bits or whatever it chooses, so long as the precision is greater than or equal to 32 bits. See "(.1f+.2f==.3f) != (.1f+.2f).Equals(.3f) Why?" for more details.
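
As a rough illustration of that point (a sketch, not code from the original answer -- the whole caveat is that the output may differ across runtimes, JIT compilers, and platforms), the two expressions named in that linked question can disagree on the same machine:

// Whether these two lines agree depends on the precision the runtime
// happens to use for the intermediate sum. On some 32-bit x86 JITs the
// first has been observed to print False while the second prints True,
// because calling Equals on the result can force the value out of a
// high-precision register and round it to a true 32-bit float.
// No particular outcome is guaranteed on any given machine.
Console.WriteLine(.1f + .2f == .3f);
Console.WriteLine((.1f + .2f).Equals(.3f));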

Asking what in particular caused the processor to use higher precision in one case and not in the other is a losing game. It could be anything. If you require accurate computations in decimal figures then use the aptly named decimal type. If you require repeatable computations in floats then C# has two mechanisms for forcing the processor back to 32 bits: (1) explicitly and unnecessarily cast to (float), or (2) store the result in a float array element or a float field of a reference type.
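
A minimal sketch of both mechanisms applied to the original values (the names viaCasts, viaArray, and scratch are illustrative, not from the original post):

float? a = 2.1f;
float? b = 3.8f;
float? c = 0.2f;

// (1) Redundant explicit casts: each (float) cast forces the intermediate
// result to be rounded to true 32-bit precision before the next addition.
float viaCasts = (float)((float)((a ?? 0) + (b ?? 0)) + (c ?? 0));

// (2) Storing into a float array element (a float field of a reference
// type works the same way) also rounds the stored value to 32 bits.
float[] scratch = new float[1];
scratch[0] = (a ?? 0) + (b ?? 0);
scratch[0] += c ?? 0;
float viaArray = scratch[0];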

The behavior here has nothing to do with Nullable<T>. It's a matter of floats not always being exact and being calculated at different precisions at the whim of the processor.

In general, this comes down to the advice that if accuracy is important, your best bet is to use something other than float (or use the techniques described by @EricLippert to force the processor to use 32 bit precision).
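
For example, since 2.1, 3.8, and 0.2 are exact base-10 quantities, decimal represents them without error (a sketch under that assumption, not code from the original post):

decimal? a = 2.1m;
decimal? b = 3.8m;
decimal? c = 0.2m;

// decimal stores base-10 digits, so each value and their sum are exact.
decimal total = (a ?? 0m) + (b ?? 0m) + (c ?? 0m);
Console.WriteLine(total); // prints 6.1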

The answer from Eric Lippert on the linked question is also helpful in understanding what's going on.

answered Oct 21 '22 by 3 revs, 2 users 57%