
The precision of printf with specifier "%g"


Can anybody explain to me how the [.precision] in printf works with the specifier "%g"? I'm quite confused by the following output:

double value = 3122.55;
printf("%.16g\n", value); //output: 3122.55
printf("%.17g\n", value); //output: 3122.5500000000002

I've learned that %g uses the shortest representation.

But the following outputs still confuse me:

printf("%.16e\n", value); //output: 3.1225500000000002e+03 printf("%.16f\n", value); //output: 3122.5500000000001819 printf("%.17e\n", value); //output: 3.12255000000000018e+03 printf("%.17f\n", value); //output: 3122.55000000000018190 

My question is: why does %.16g give the exact number while %.17g can't?

It seems 16 significant digits are accurate. Could anyone tell me the reason?

asked Jun 05 '15 by cssmlulu


People also ask

What is %g in printf?

According to most sources I've found, across multiple languages that use printf specifiers, the %g specifier is supposed to be equivalent to either %f or %e - whichever would produce shorter output for the provided value.
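For instance, a minimal sketch of %g switching notation depending on the magnitude of the value (outputs assume the default precision of 6):

#include <stdio.h>

int main(void) {
    double small = 0.000012345;  /* exponent below -4, so %g uses %e style */
    double large = 12345.6789;   /* exponent below the precision, so %g uses %f style */

    printf("%g\n", small);  /* prints 1.2345e-05 */
    printf("%g\n", large);  /* prints 12345.7 (rounded to 6 significant digits) */
    return 0;
}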

What is %g specifier?

The general ("G") format specifier converts a number to the more compact of either fixed-point or scientific notation, depending on the type of the number and whether a precision specifier is present. The precision specifier defines the maximum number of significant digits that can appear in the result string.

What does %g mean C?

%g is used to print decimal floating-point values. Its precision gives the maximum number of significant digits, and the value is printed in the more compact of fixed-point or exponential notation, with trailing zeros removed.

How do I specify precision in printf?

You can specify a "precision"; for example, %.2f formats a floating-point number with two digits printed after the decimal point. You can also add certain characters which specify various options, such as how a too-narrow field is to be padded out to its field width, or what type of number you're printing.
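A small illustration of precision combined with field width and padding options:

#include <stdio.h>

int main(void) {
    double price = 3.14159;

    printf("%.2f\n", price);    /* two digits after the decimal point: 3.14 */
    printf("%8.2f\n", price);   /* right-justified in a field of width 8: "    3.14" */
    printf("%-8.2f|\n", price); /* left-justified in the field: "3.14    |" */
    return 0;
}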


1 Answer

%g uses the shortest representation.

Floating-point numbers usually aren't stored in base 10, but in base 2 (for performance, size, and practicality reasons). However, whatever the base of your representation, there will always be rational numbers that are not exactly representable within the fixed size of the variable that stores them.
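You can observe this directly by asking printf for more digits than 3122.55 actually has as a double, or by printing the exact bit pattern with C99's %a (outputs assume IEEE 754 doubles; the exact %a formatting may vary by library):

#include <stdio.h>

int main(void) {
    double value = 3122.55;

    /* The nearest double to 3122.55 is not exactly 3122.55: */
    printf("%.25f\n", value);  /* 3122.5500000000001818989403546 */
    printf("%a\n", value);     /* 0x1.8651999999999ap+11 (exact bit pattern) */
    return 0;
}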

When you specify %.16g, you're saying that you want the shortest representation of the number given with a maximum of 16 significant digits.

If the exact stored value has more than 16 significant digits, printf rounds the number string to 16 digits, leaving you with 3122.550000000000; %g then strips the trailing zeros, giving 3122.55, which explains the result you obtained.
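To see the trimming, compare %.16g with %.15e, which rounds to the same 16 significant digits but keeps the trailing zeros:

#include <stdio.h>

int main(void) {
    double value = 3122.55;

    /* Both round to 16 significant digits; only %e keeps the zeros. */
    printf("%.16g\n", value);  /* 3122.55 */
    printf("%.15e\n", value);  /* 3.122550000000000e+03 */
    return 0;
}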

In general, %g will always give you the shortest result possible, meaning that if the sequence of digits representing your number can be shortened without any loss of precision, it will be done.

To continue the example, when you use %.17g, the 17th significant digit is different from 0 (2, in particular), so you end up with the full number 3122.5500000000002.

My question is: why does %.16g give the exact number while %.17g can't?

It's actually %.17g which gives you the exact result, while %.16g gives you only a rounded approximation (when compared to the value in memory).
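A quick round-trip sketch confirming this (for IEEE 754 doubles, 17 significant digits are always enough to recover the exact stored value):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double value = 3122.55;
    char buf[64];

    /* Print with 17 significant digits, then parse the string back. */
    snprintf(buf, sizeof buf, "%.17g", value);
    double back = strtod(buf, NULL);

    printf("%s -> %s\n", buf,
           back == value ? "round-trips exactly" : "lost precision");
    return 0;
}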

If you want a fixed number of digits after the decimal point, use %f or %F instead.
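For instance, precision means digits after the decimal point for %f, but significant digits for %g:

#include <stdio.h>

int main(void) {
    double value = 3122.55;

    printf("%.2f\n", value);  /* 3122.55  (2 digits after the point) */
    printf("%.2g\n", value);  /* 3.1e+03  (2 significant digits) */
    return 0;
}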

answered Dec 27 '22 by user35443