In constexpr: Introduction, the speaker mentioned "Compile-time floating point calculations might not have the same results as runtime calculations":
And the reason is related to "cross-compiling".
Honestly, I can't quite grasp the idea. IMHO, different platforms may also have different implementations of integers.
Why does this only affect floating point? Or am I missing something?
Floating-point decimal values generally do not have an exact binary representation due to how the CPU represents floating point data. For this reason, you may experience a loss of precision, and some floating-point operations may produce unexpected results.
Because floating-point numbers have a limited number of digits, they cannot represent all real numbers accurately: when there are more digits than the format allows, the leftover ones are omitted - the number is rounded.
Why does it only affect floating points?
Because the standard doesn't impose restrictions on the accuracy of floating-point operations.
As per [expr.const]:
[ Note: Since this document imposes no restrictions on the accuracy of floating-point operations, it is unspecified whether the evaluation of a floating-point expression during translation yields the same result as the evaluation of the same expression (or the same operations on the same values) during program execution. [ Example:

    bool f() {
        char array[1 + int(1 + 0.2 - 0.1 - 0.1)];  // Must be evaluated during translation
        int size = 1 + int(1 + 0.2 - 0.1 - 0.1);   // May be evaluated at runtime
        return sizeof(array) == size;
    }

It is unspecified whether the value of f() will be true or false. — end example ]
— end note ]