Comparing floating point numbers (double, float) in .NET directly for equality is not safe. A double value stored in a variable may appear to change by a very small amount. For example, if you set an object's num field (a double) to 0.2, then after that object has sat in memory for a while you may find that num has become 0.1999999999999, so num == 0.2 will be false in this case. My solution to this problem is to create a property that rounds the number:
double Num
{
    get { return Math.Round(num, 1); }
}
After the get accessor of Num is called and the result is returned, can that returned number change back to 0.19... by the time of the comparison (Num == 0.2)? It seems unlikely, but is it guaranteed not to happen?
To compare two floating point values, we have to take precision into account. For example, the numbers 3.1428 and 3.1415 are equal to a precision of 0.01, but not to a finer precision such as 0.001.
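As a minimal sketch of comparing at a chosen precision, we can test whether the absolute difference is below the precision we care about:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        double a = 3.1428, b = 3.1415;

        // Equal to within 0.01, but not to within 0.001.
        Console.WriteLine(Math.Abs(a - b) < 0.01);  // True
        Console.WriteLine(Math.Abs(a - b) < 0.001); // False
    }
}
```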
And the reason the comparison succeeds with 1.5 is that 1.5 can be represented exactly both as a float and as a double; its low bits are all zeros, so when the promotion to double appends zeros, the result is identical to the double representation.
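A quick way to see this: 1.5 is 1 + 1/2, which has an exact binary representation, while 0.1 does not, so the float-to-double promotion only matches for the former:

```csharp
using System;

class PromotionDemo
{
    static void Main()
    {
        float exact = 1.5f;   // exactly representable in binary
        float inexact = 0.1f; // no finite binary expansion

        // Widening 1.5f to double only appends zero bits, so equality holds.
        Console.WriteLine((double)exact == 1.5);   // True

        // 0.1f and 0.1d are rounded to different nearby binary values.
        Console.WriteLine((double)inexact == 0.1); // False
    }
}
```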
The relational operators <, <=, >= and > are safe to use for comparing float and double values. The equality operators == and != are best reserved for integers.
There are two floating point primitive types. Data type float occupies 32 bits and is sometimes called "single-precision floating point". Data type double has twice as many bits (64) and is sometimes called "double-precision floating point".
No, it is not guaranteed.
From MSDN - Math.Round:
The behavior of this method follows IEEE Standard 754, section 4. This kind of rounding is sometimes called rounding to nearest, or banker's rounding. It minimizes rounding errors that result from consistently rounding a midpoint value in a single direction.
(emphasis mine)
Point is - it minimizes, not ensures.
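To illustrate what banker's rounding does at exact midpoints (both 2.5 and 3.5 are exactly representable as doubles, so the behaviour below is deterministic):

```csharp
using System;

class BankersRounding
{
    static void Main()
    {
        // Midpoints round to the nearest even integer, not always up.
        Console.WriteLine(Math.Round(2.5)); // 2
        Console.WriteLine(Math.Round(3.5)); // 4

        // Opt in to the "schoolbook" behaviour explicitly if you need it.
        Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero)); // 3
    }
}
```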
When comparing floating point types, you should always test against an epsilon: a minimum difference below which you treat the values as equal.
Example adapted from here (the initial value of num is assumed for illustration):
double num = 0.1999999999999; // the value being compared
double dValue = 0.2;

var diff = Math.Abs(num - dValue);
if (diff < 0.0000001) // need some minimum threshold to compare floating points
{
    // treat as equal
}
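A fixed absolute epsilon breaks down for large magnitudes, where the gap between neighbouring doubles can exceed the threshold. One common refinement (a hypothetical helper, not from the original answer) combines an absolute epsilon near zero with a relative one elsewhere:

```csharp
using System;

static class FloatCompare
{
    // Sketch of a combined absolute/relative comparison. The epsilon
    // values are illustrative defaults, not universal constants.
    public static bool NearlyEqual(double a, double b,
                                   double absEps = 1e-9, double relEps = 1e-9)
    {
        double diff = Math.Abs(a - b);
        if (diff <= absEps)
            return true; // handles values at or near zero
        // Otherwise scale the tolerance by the larger magnitude.
        return diff <= relEps * Math.Max(Math.Abs(a), Math.Abs(b));
    }
}
```

Usage: FloatCompare.NearlyEqual(0.1 + 0.2, 0.3) returns true even though 0.1 + 0.2 == 0.3 is false for doubles.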
Recommended reading: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Believe it or not, this is intended behaviour, and it conforms to the IEEE 754 standard.
It's not possible to represent every real-world value, whether a massive number or a small fraction, with complete fidelity in a fixed-size binary representation. The floating point types in .NET, such as float and double, do their best to minimize error when you assign numbers to them, so when you assigned 0.2 to the variable, the language chose the representation with the smallest error.
It's not that the number somehow degrades in memory; the rounding happens once, at assignment, as a deliberate step. If you are comparing floating point numbers, you should always allow an acceptable region on either side of your comparison. Your representation of 0.2 is accurate to a great many decimal places. Is that good enough for your application? The difference looks glaring to your eyes, but it is actually a very small error. When comparing doubles and floats (to integers or to each other), you should always decide what precision is acceptable and accept a range on either side of your expected result.
You can also choose other types, such as decimal, which represents decimal fractions exactly within its precision, but it is also much larger (128 bits) and slower than float and double.
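The difference is easy to demonstrate: the classic 0.1 + 0.2 comparison fails for binary doubles but succeeds for the base-10 decimal type:

```csharp
using System;

class DecimalDemo
{
    static void Main()
    {
        // double stores binary fractions, so 0.1 and 0.2 are inexact.
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False

        // decimal stores base-10 digits, so these values are exact.
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True
    }
}
```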