 

What's the difference between printf("%.d", 0) and printf("%.1d", 0)?

I'm working on recoding printf and I've been stuck for a while on the precision flag. I read that when the conversion specifier is d, the default precision is 1:

[table: default precision per conversion specifier]

So I assumed there is no difference between %.d and %.1d, but when I test:

printf(".d =%.d, .1d= %.1d", 0, 0);

I do find one:

.d =, .1d= 0
yuva asked Jun 04 '20

3 Answers

If you use . after % without specifying the precision, it is set to zero.

From the printf page on cppreference.com:

. followed by integer number or *, or neither that specifies precision of the conversion. In the case when * is used, the precision is specified by an additional argument of type int. If the value of this argument is negative, it is ignored. If neither a number nor * is used, the precision is taken as zero.

It defaults to 1 if you use %d (without .):

printf("d = %d, 1d= %1d", 0, 0);
# Output:   d = 0, 1d= 0
Aziz answered Oct 20 '22

The C18 standard - ISO/IEC 9899:2018 - (emphasis mine) states:

"An optional precision that gives the minimum number of digits to appear for the d, i, o, u, x, and X conversions, the number of digits to appear after the decimal-point character for a, A, e, E, f, and F conversions, the maximum number of significant digits for the g and G conversions, or the maximum number of bytes to be written for s conversions. The precision takes the form of a period (.) followed either by an asterisk * (described later) or by an optional nonnegative decimal integer; if only the period is specified, the precision is taken as zero. If a precision appears with any other conversion specifier, the behavior is undefined."

Source: C18, §7.21.6.1/4

This means %.d is equivalent to %.0d, and therefore different from %.1d.


Furthermore:

"d,i - The int argument is converted to signed decimal in the style [-]dddd. The precision specifies the minimum number of digits to appear; if the value being converted can be represented in fewer digits, it is expanded with leading zeros. The default precision is 1. The result of converting a zero value with a precision of zero is no characters."

Source: C18, §7.21.6.1/8

That means if you convert a 0 value by using %.d in a printf() call, the result is guaranteed to be no characters printed (which matches your test output).

RobertS supports Monica Cellio answered Oct 20 '22


When the precision is set to zero, or its value is omitted as in

printf( "%.d", x );

then, according to the description of the conversion specifiers d and i (7.21.6.1 The fprintf function):

The int argument is converted to signed decimal in the style [−]dddd. The precision specifies the minimum number of digits to appear; if the value being converted can be represented in fewer digits, it is expanded with leading zeros. The default precision is 1. The result of converting a zero value with a precision of zero is no characters.

Here is a demonstrative program

#include <stdio.h>

int main(void) 
{
    printf( "%.d\n", 0 );
    printf( "%.0d\n", 0 );
    printf( "%.1d\n", 0 );      
    return 0;
}

Its output is two empty lines (the first two calls print no digits, only the newline) followed by

0

That is, when the precision is equal to 0 or is absent, and 0 is passed as the argument, no characters are output for the conversion.

Vlad from Moscow answered Oct 20 '22