I was going through The C Programming Language by K&R. In a statement that prints a double variable, it is written
printf("\t%g\n", sum += atof(line));
where sum is declared as double. Can anybody please help me out with when to use %g for a double or a float, and what's the difference between %g and %f?
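For context, here is a compilable approximation of the surrounding loop from the book, using fgets and the standard library's atof in place of the getline and atof the chapter defines itself:

```c
#include <stdio.h>
#include <stdlib.h>

#define MAXLINE 100

/* rough sketch of the K&R running-sum loop:
   read a number per line and print the running total with %g */
int main(void)
{
    double sum = 0.0;
    char line[MAXLINE];

    while (fgets(line, MAXLINE, stdin) != NULL)
        printf("\t%g\n", sum += atof(line));
    return 0;
}
```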
%g is used to print decimal floating-point values, but its precision counts significant digits (6 by default) rather than digits after the decimal point, and trailing zeros are removed, so the output does not necessarily echo the input digits exactly.
Across the languages that use printf-style specifiers, %g is equivalent to either %f or %e, whichever produces the shorter output for the given value; the C standard makes the choice based on the value's decimal exponent relative to the precision.
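A small sketch of that choice (the exact digits shown are what a typical C library produces with the default precision of 6):

```c
#include <stdio.h>

int main(void)
{
    double small = 0.000012345;   /* decimal exponent < -4, so %g uses %e form */
    double medium = 123.456;      /* exponent within range, so %g uses %f form */

    printf("%f  %e  %g\n", small, small, small);
    /* 0.000012  1.234500e-05  1.2345e-05 */
    printf("%f  %e  %g\n", medium, medium, medium);
    /* 123.456000  1.234560e+02  123.456 */
    return 0;
}
```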
%f prints a floating-point number in fixed decimal notation (it covers both float and double in printf, since float arguments are promoted); %u prints an unsigned decimal integer.
The short answer is that the l length modifier has no impact in printf and denotes double (rather than float) in scanf. For printf, arguments of type float are promoted to double, so both %f and %lf print a double. For scanf, you should use %f for float and %lf for double.
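A minimal sketch of that difference, assuming two numbers are read from standard input:

```c
#include <stdio.h>

int main(void)
{
    float f;
    double d;

    /* scanf needs the size to match: %f stores into a float, %lf into a double */
    if (scanf("%f %lf", &f, &d) == 2) {
        /* printf promotes the float to double, so %g (or %f) works for both */
        printf("%g %g\n", f, d);
    }
    return 0;
}
```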
They are both examples of floating point input/output.
%g and %G are the "general" counterparts of the scientific-notation conversions %e and %E.
%g will take a number that could be represented as %f (plain decimal notation) or %e (scientific notation) and print it using whichever of the two is shorter.
The output of your print statement will depend on the value of sum.
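For instance, with values chosen only for illustration (not taken from the book), the same format string switches between plain and scientific notation:

```c
#include <stdio.h>

int main(void)
{
    double sum = 0.0001;       /* exponent is exactly -4: still plain decimal */
    printf("\t%g\n", sum);     /* prints  0.0001 */

    sum = 0.00001;             /* exponent below -4: scientific notation */
    printf("\t%g\n", sum);     /* prints  1e-05 */

    sum = 1234567.0;           /* exponent >= default precision of 6: scientific */
    printf("\t%g\n", sum);     /* prints  1.23457e+06 */
    return 0;
}
```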