Can anybody explain to me how the [.precision]
in printf works with the "%g" specifier? I'm quite confused by the following output:
    double value = 3122.55;
    printf("%.16g\n", value); //output: 3122.55
    printf("%.17g\n", value); //output: 3122.5500000000002
I've learned that %g uses the shortest representation. But the following outputs still confuse me:
printf("%.16e\n", value); //output: 3.1225500000000002e+03 printf("%.16f\n", value); //output: 3122.5500000000001819 printf("%.17e\n", value); //output: 3.12255000000000018e+03 printf("%.17f\n", value); //output: 3122.55000000000018190
My question is: why does %.16g give the exact number while %.17g can't? It seems 16 significant digits can be accurate. Could anyone tell me the reason?
According to most sources I've found, across multiple languages that use printf specifiers, the %g specifier is supposed to be equivalent to either %f or %e - whichever would produce shorter output for the provided value.
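More precisely, the C standard says %g falls back to %e-style output when the decimal exponent is less than -4 or at least the precision (6 by default), uses %f-style output otherwise, and then strips trailing zeros. A small sketch of that choice; the output comments show what a typical C library prints:

    #include <stdio.h>

    int main(void) {
        /* With the default precision of 6, %g picks %e style when the
           decimal exponent is < -4 or >= 6, and %f style otherwise,
           then drops trailing zeros. */
        printf("%g\n", 0.0001);    /* 0.0001      (fixed style)      */
        printf("%g\n", 0.00001);   /* 1e-05       (scientific style) */
        printf("%g\n", 123456.0);  /* 123456      (fixed style)      */
        printf("%g\n", 1234567.0); /* 1.23457e+06 (scientific style) */
        return 0;
    }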
The general ("G") format specifier converts a number to the more compact of either fixed-point or scientific notation, depending on the type of the number and whether a precision specifier is present. The precision specifier defines the maximum number of significant digits that can appear in the result string.
%g. It is used to print the decimal floating-point values, and it uses the fixed precision, i.e., the value after the decimal in input would be exactly the same as the value in the output.
You can specify a ``precision''; for example, %. 2f formats a floating-point number with two digits printed after the decimal point. You can also add certain characters which specify various options, such as how a too-narrow field is to be padded out to its field width, or what type of number you're printing.
You say that "%g uses the shortest representation."
Floating-point numbers usually aren't stored in base 10, but in base 2 (for performance, size, and practicality reasons). However, whatever the base of the representation, there will always be rational numbers that cannot be expressed exactly within the limited size of the variable that stores them.
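3122.55 is one of those numbers: it has no exact binary representation, so the double holds the nearest representable value. A quick way to see the stored approximation; the comments show the output on a typical IEEE-754 system:

    #include <stdio.h>

    int main(void) {
        double value = 3122.55;    /* actually stores the nearest double */

        /* Enough digits after the point to expose the stored approximation. */
        printf("%.17f\n", value);  /* 3122.55000000000018190 (as in the question) */

        /* %a prints the stored value exactly, in hexadecimal floating point;
           on an IEEE-754 machine this is typically 0x1.865199999999ap+11. */
        printf("%a\n", value);
        return 0;
    }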
When you specify %.16g, you're saying that you want the shortest representation of the number, with a maximum of 16 significant digits.
If the shortest representation has more than 16 digits, printf rounds it, cutting off the trailing 2 and leaving you with 3122.550000000000, which is simply 3122.55 in its shortest form. That explains the result you obtained.
In general, %g will always give you the shortest result possible, meaning that if the sequence of digits representing your number can be shortened without any loss of precision, it will be.
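You can watch that trailing-zero stripping happen by printing the same 16 significant digits in %e style, which keeps the zeros, and in %g style, which drops them (again assuming an IEEE-754 double):

    #include <stdio.h>

    int main(void) {
        double value = 3122.55;    /* stored as roughly 3122.5500000000001819 */

        /* 16 significant digits in %e style: the trailing zeros stay. */
        printf("%.15e\n", value);  /* 3.122550000000000e+03 */

        /* 16 significant digits in %g style: trailing zeros are removed,
           leaving the short form from the question. */
        printf("%.16g\n", value);  /* 3122.55 */
        return 0;
    }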
To continue the example, when you use %.17g, the 17th significant digit is not 0 (it rounds to a 2 here), so you end up with the full number 3122.5500000000002.
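Side by side, still assuming an IEEE-754 double, you can see that 17th significant digit rounding up to a 2:

    #include <stdio.h>

    int main(void) {
        double value = 3122.55;    /* stored as roughly 3122.5500000000001819 */

        /* 17 significant digits: the stored ...0001819 tail rounds the 17th
           digit up to 2, so the representation error becomes visible. */
        printf("%.16e\n", value);  /* 3.1225500000000002e+03 */
        printf("%.17g\n", value);  /* 3122.5500000000002 */
        return 0;
    }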
"My question is: why does %.16g give the exact number while %.17g can't?"
It's actually %.17g that gives you the exact result, while %.16g gives you only a rounded approximation with an error (when compared to the value in memory).
If you want a fixed number of digits after the decimal point instead, use %f or %F.
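A rough sketch of both points, assuming an IEEE-754 double: 17 significant digits are always enough to get the stored value back exactly, while %f counts digits after the decimal point rather than significant digits:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        double value = 3122.55;
        char buf[64];

        /* 17 significant digits (DBL_DECIMAL_DIG in <float.h>) always round-trip
           a double, so the %.17g string identifies the stored value exactly. */
        snprintf(buf, sizeof buf, "%.17g", value);
        printf("round-trip through 17 digits is exact: %s\n",
               strtod(buf, NULL) == value ? "yes" : "no");  /* yes */

        /* %f fixes the number of digits after the decimal point instead of
           the number of significant digits. */
        printf("%.2f\n", value);  /* 3122.55 */
        return 0;
    }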