
What is the difference between %g and %f in C?

Tags: c, variables

I was going through The C Programming Language by K&R. There, a statement that prints a double variable is written as

printf("\t%g\n", sum += atof(line)); 

where sum is declared as a double. Can anybody please help me understand when to use %g for a double or a float, and what the difference is between %g and %f?

Asked May 06 '11 by Shashi Bhushan

People also ask

What does %g mean in C?

%g prints a floating-point value in either fixed (%f) or scientific (%e) notation, whichever is more compact for the given value, and removes trailing zeros from the result.

What is %g in printf?

According to most sources I've found, across multiple languages that use printf specifiers, the %g specifier is supposed to be equivalent to either %f or %e - whichever would produce shorter output for the provided value.

What is %f in C language?

%f prints a floating-point number (float arguments are promoted to double); by comparison, %u prints an unsigned decimal int.

What is the difference between %f and %lf in C?

The short answer is that it has no impact on printf , and denotes use of float or double in scanf . For printf , arguments of type float are promoted to double so both %f and %lf are used for double . For scanf , you should use %f for float and %lf for double .
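
A minimal sketch of that printf/scanf distinction (the variable names are just for illustration):

#include <stdio.h>

int main(void)
{
    float f;
    double d;

    /* scanf needs the size distinction: %f reads into a float*, %lf into a double* */
    if (scanf("%f %lf", &f, &d) != 2)
        return 1;

    /* printf promotes float arguments to double, so %f covers both;
       %lf is also accepted for double in C99 and later and means the same thing */
    printf("%f %f\n", f, d);

    return 0;
}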


1 Answer

They are both conversion specifiers for printing floating-point values.

%g and %G choose between the fixed notation of %f and the scientific notation of %e and %E.

%g takes a value that could be printed either as %f (plain fixed notation for a float or double) or as %e (scientific notation) and prints it using whichever of the two forms is shorter, dropping any trailing zeros.

The output of your print statement will depend on the value of sum.
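
As a rough illustration of that behaviour (the sample values are arbitrary), the following program prints the same numbers with %f, %e and %g side by side:

#include <stdio.h>

int main(void)
{
    double values[] = { 0.0001, 0.5, 100.0, 123456789.0 };

    for (int i = 0; i < 4; ++i) {
        /* %f: always fixed notation, %e: always scientific notation,
           %g: whichever of the two is shorter, with trailing zeros removed */
        printf("%f\t%e\t%g\n", values[i], values[i], values[i]);
    }

    return 0;
}

With the default precision of six significant digits, %g prints 0.5 as 0.5 and 100.0 as 100, but switches to scientific notation for 123456789.0 and prints 1.23457e+08.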

Answered Sep 30 '22 by Daniel Nill