On some platforms, int32_t (from stdint.h) is long int, but on other platforms it could be int. When I want to use printf, how can I determine which format, "%ld" or "%d", should be used?
Or perhaps I should force a conversion to long, like below:
int32_t m;
m = 3;
printf ("%ld\n", (long)m);
But that solution is tedious. Any suggestions?
In C (since C99), the inttypes.h header contains macros that expand to format specifiers for the fixed-width types. For int32_t:
printf("%" PRId32 "\n", m);
That macro is likely to expand to "d"
or "ld"
. You can put the usual modifiers and so on, e.g.:
printf("%03" PRId32 "\n", m);
In C++ (since C++11) the same facility is available with #include <inttypes.h>
, or #include <cinttypes>
.
Apparently, some C++ implementations require the user to write #define __STDC_FORMAT_MACROS 1
before #include <inttypes.h>
, even though the C++ Standard specifies that is not required.