I came across some code with a line looking like:
fprintf(fd, "%4.8f", ptr->myFlt);
Not working with C++ much these days, I read the documentation on printf and its ilk, and learned that in this case 4 is the "width" and 8 is the "precision". Width is defined as the minimum number of characters occupied by the output, padded with leading blanks if need be.
That being the case, I can't understand what the point of a template like "%4.8f" would be, since the 8 (zero-padded if necessary) decimals after the point would already ensure that the width of 4 was met and exceeded. So, I wrote a little program, in Visual C++:
// Formatting width test
#include "stdafx.h"

int _tmain(int argc, _TCHAR* argv[])
{
    printf("Need width when decimals are smaller: >%4.1f<\n", 3.4567);
    printf("Seems unnecessary when decimals are greater: >%4.8f<\n", 3.4567);
    printf("Doesn't matter if argument has no decimal places: >%4.8f<\n", (float)3);
    return 0;
}
which gives the following output:
Need width when decimals are smaller: > 3.5<
Seems unnecessary when decimals are greater: >3.45670000<
Doesn't matter if argument has no decimal places: >3.00000000<
In the first case, the precision is less than the specified width, and in fact a leading space is added. When the precision is greater, however, the width seems redundant.
Is there a reason for a format like that?
The printf precision specifier sets the maximum number of characters to print (for strings) or the minimum number of integer digits (for integer conversions); for the f conversion it is the number of digits printed after the decimal point, and it defaults to six when no precision is given. A precision specification always begins with a period (.) to separate it from any preceding width specifier, and is given directly as a decimal digit string. The width modifier specifies the minimum number of character positions the output will occupy; if no width is mentioned, the output takes just as many positions as the converted data requires.
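As a minimal sketch of those two rules (the value 3.4567 here is just an arbitrary sample, not taken from the question's program):
#include <stdio.h>

int main(void)
{
    double v = 3.4567;

    printf(">%f<\n", v);     /* no precision: defaults to 6 digits after the point */
    printf(">%.2f<\n", v);   /* precision 2: two digits after the point */
    printf(">%10.2f<\n", v); /* width 10: padded with leading blanks to 10 characters */
    printf(">%2.2f<\n", v);  /* width 2 is already exceeded, so it has no effect */
    return 0;
}
which should print:
>3.456700<
>3.46<
>      3.46<
>3.46<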
The width format specifier only affects the output if the total width of the printed number is less than the specified width. That can never happen when the precision is greater than or equal to the width: an f conversion always produces at least one digit before the decimal point, the point itself, and precision digits after it, so the result is already wider than the field. The width specification is therefore useless in this case.
Here's an article from MSDN; the last sentence explains it.
A nonexistent or small field width does not cause the truncation of a field; if the result of a conversion is wider than the field width, the field expands to contain the conversion result.
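A quick sketch of that rule (the values below are arbitrary samples): conversions that are wider than the requested field simply expand past it rather than being truncated.
#include <stdio.h>

int main(void)
{
    /* Width asks for a minimum number of characters; neither result is
       truncated, the field just expands to hold the full conversion. */
    printf(">%2d<\n", 123456);      /* prints >123456<: field expands past width 2 */
    printf(">%4s<\n", "expansion"); /* prints >expansion<: same for strings */
    return 0;
}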
Perhaps it's a mistake by the programmer? Perhaps they swapped width and precision and meant %8.4f, or they actually intended %12.8f, or even %012.8f.
See codepad sample:
#include <stdio.h>

int main()
{
    printf("Seems unnecessary when decimals are greater: >%4.8f<\n", 3.4567);
    printf("Seems unnecessary when decimals are greater: >%8.4f<\n", 3.4567);
    printf("Seems unnecessary when decimals are greater: >%12.4f<\n", 3.4567);
    printf("Seems unnecessary when decimals are greater: >%012.4f<\n", 3.4567);
    return 0;
}
Output
Seems unnecessary when decimals are greater: >3.45670000<
Seems unnecessary when decimals are greater: >  3.4567<
Seems unnecessary when decimals are greater: >      3.4567<
Seems unnecessary when decimals are greater: >0000003.4567<