#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("%f\n", FLT_MAX);
}
Output from GNU:
340282346638528859811704183484516925440.000000
Output from Visual Studio:
340282346638528860000000000000000000000.000000
Do the C and C++ standards allow both results? Or do they mandate a specific result?
Note that FLT_MAX = 2^128 - 2^104 = 340282346638528859811704183484516925440.
I think the relevant part of the C99 standard is the "Recommended practice" from 7.19.6.1 p.13:
For e, E, f, F, g, and G conversions, if the number of significant decimal digits is at most DECIMAL_DIG, then the result should be correctly rounded. If the number of significant decimal digits is more than DECIMAL_DIG but the source value is exactly representable with DECIMAL_DIG digits, then the result should be an exact representation with trailing zeros. Otherwise, the source value is bounded by two adjacent decimal strings L < U, both having DECIMAL_DIG significant digits; the value of the resultant decimal string D should satisfy L <= D <= U, with the extra stipulation that the error should have a correct sign for the current rounding direction.
My impression is that this allows some leeway. The exact value of FLT_MAX takes 39 significant decimal digits, which is more than DECIMAL_DIG (commonly 17, or 21 where long double is 80-bit), and it is not exactly representable with DECIMAL_DIG digits, so the "Otherwise" clause applies: any decimal string D with L <= D <= U is acceptable. GCC's exact 39-digit output and Visual Studio's rounding to 17 significant digits both satisfy that bound, so my conclusion is that both are compliant here. (And this is only "Recommended practice" in any case, so it says "should", not "shall".)
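One way to see the recommended practice in action (a sketch I am adding, not from the original question) is to request exactly DECIMAL_DIG significant digits; up to that precision the result should be correctly rounded, so implementations with the same DECIMAL_DIG should agree:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* Correct rounding is recommended up to DECIMAL_DIG significant digits. */
    printf("DECIMAL_DIG = %d\n", DECIMAL_DIG);
    /* %.*e prints DECIMAL_DIG significant digits in total:
       one before the decimal point and DECIMAL_DIG - 1 after it. */
    printf("%.*e\n", DECIMAL_DIG - 1, FLT_MAX);
    return 0;
}

If DECIMAL_DIG is 17 (typical where long double is the same as double, as in Visual Studio), I would expect 3.4028234663852886e+38; with DECIMAL_DIG = 21 (glibc on x86 with 80-bit long double) the correctly rounded result is 3.40282346638528859812e+38.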