A few days ago, I was trying to subtract 10000 from std::numeric_limits<float>::max()
and I found that the value didn't change at all, no matter what amount I subtracted. In fact, all floating-point types seem to have this behavior.
For instance (on g++ and MSVC), this one doesn't pass (good):
int i = std::numeric_limits<int>::max();
assert(i == i - 10000); // Doesn't pass
But this one does (?):
float f = std::numeric_limits<float>::max();
assert(f == f - 10000.f); // Pass
I even tried assigning the maximum value directly (3.40282e+38 in this case), but it doesn't seem to change anything. It also does the exact same thing with any sufficiently large value. Could someone explain to me why this happens? Thanks.
Floating-point numbers are not exact like int. The amount you subtracted is far too small to make a difference in the significand, so it simply gets lost to rounding. std::numeric_limits<float>::max() is insanely large (3.402823e+38).
If you do:
float f = std::numeric_limits<float>::max();
assert(f == f - f/2.f);
I'm sure that assert will fail, because f/2 is large enough to actually change the value.