Take the following:

#include <stdio.h>

int main() {
    unsigned long long verybig = 285212672;
    printf("Without variable : %llu\n", 285212672);
    printf("With variable : %llu", verybig);
}
This is the output of the above program:
Without variable : 18035667472744448
With variable : 285212672
As you can see from the above, when printf is passed the number as a constant, it prints some huge incorrect number, but when the value is first stored in a variable, printf prints the correct number.
What is the reasoning behind this?
Try 285212672ULL; if you write it without suffixes, you'll find the compiler treats it as a regular integer. The reason it works with a variable is that the integer is converted to an unsigned long long in the assignment, so the value passed to printf() is of the right type.
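
For illustration, here is the program from the question with that fix applied (a minimal sketch; the only substantive change is the ULL suffix on the constant, plus a trailing newline):

#include <stdio.h>

int main() {
    unsigned long long verybig = 285212672;

    /* The ULL suffix makes the constant an unsigned long long,
       matching the %llu conversion specifier. */
    printf("Without variable : %llu\n", 285212672ULL);
    printf("With variable : %llu\n", verybig);
}

Both lines should now print 285212672.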
And before you ask, no, the compiler probably isn't smart enough to figure it out from the "%llu" in the printf() format string. That's a different level of abstraction: the compiler is responsible for the language syntax, but printf() semantics are not part of the syntax; it's a runtime library function, no different really from your own functions except that it's included in the standard library.
Consider the following code for a 32-bit int and 64-bit unsigned long long system:
#include <stdio.h>
int main(void) {
    printf("%llu\n", 1, 2);
    printf("%llu\n", 1ULL, 2);
    return 0;
}
which outputs:
8589934593
1
In the first case, the two 32-bit integers 1 and 2 are pushed on the stack and printf() interprets them as a single 64-bit unsigned long long value, 2 × 2^32 + 1 = 8589934593. The 2 argument is inadvertently absorbed into the value. In the second, you actually push the 64-bit value 1ULL along with a superfluous 32-bit integer 2, which is ignored.
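
To see that arithmetic concretely, here is a minimal sketch that builds the same 64-bit value by hand (assuming the 32-bit-cell, little-endian layout described above):

#include <stdio.h>

int main(void) {
    /* The high 32-bit cell holds 2, the low cell holds 1:
       2 * 2^32 + 1 = 8589934593. */
    unsigned long long combined = (2ULL << 32) | 1ULL;
    printf("%llu\n", combined);  /* prints 8589934593 */
    return 0;
}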
Note that this "getting out of step" between your format string and your actual arguments is a bad idea. Something like:

printf("%llu %s %d\n", 0, "hello", 0);

is likely to crash because the 32-bit "hello" pointer will be consumed by the %llu and %s will try to dereference the final 0 argument. The following "picture" illustrates this (let's assume that cells are 32 bits and that the "hello" string is stored at 0xbf000000):
What you pass    Stack frames      What printf() uses

                +------------+
      0         |     0      |  \
                +------------+   > 64-bit value for %llu.
   "hello"      | 0xbf000000 |  /
                +------------+
      0         |     0      |    value for %s (likely core dump here).
                +------------+
                |     ?      |    value for %d (could be anything).
                +------------+
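
The fix is the same as before: make the first argument a genuine unsigned long long so that every specifier consumes an argument of the matching type. Either of the following (a sketch; a suffix or a cast both work) avoids the mismatch:

printf("%llu %s %d\n", 0ULL, "hello", 0);                    /* ULL suffix */
printf("%llu %s %d\n", (unsigned long long) 0, "hello", 0);  /* explicit cast */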
It's worth pointing out that some compilers give a useful warning for this case - for example, this is what GCC says about your code:
x.c: In function ‘main’:
x.c:6: warning: format ‘%llu’ expects type ‘long long unsigned int’, but argument 2 has type ‘int’
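
That warning comes from GCC's format checking (-Wformat, which -Wall enables), so compiling with warnings switched on is an easy way to catch this class of bug at build time, for example:

gcc -Wall x.c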