I am working with code that uses types like uint8 (1-byte-wide unsigned integer), uint16 (2-byte-wide unsigned integer), uint32 (4-byte-wide unsigned integer), etc. My questions are:
Are uint8 and char the same size?
Are uint16 and int the same size?
Are uint32 and long the same size?
Are uint64 and double the same size?
For example:
uint8 c[20];
Shouldn't the size of this be 20?
I wrote some code in Visual Studio:
#include <stdio.h>
#include <string.h>
typedef unsigned int uint32;
typedef unsigned int uint8;
int main()
{
    double a = 1320.134;
    uint32 b;
    uint8 c[20];
    b = (unsigned int)a;
    c[3] = b;
    printf("value is %d", c[3]);
    return 1;
}
but the size of c in debug mode is 50. Why is that?
A UInt8 is an 8-bit (1-byte) unsigned integer with a range of 0 through 255. A UInt16 is 16 bits (2 bytes). A UInt32 is a 32-bit (4-byte) unsigned integer value type with a range of 0 through 4294967295. A UInt64 is 64 bits (8 bytes) with a range of 0 through 18446744073709551615.
uint8, uint16, uint32, and uint64 are probably Microsoft-specific types. As of the 1999 standard, C supports standard typedefs with similar meanings, defined in <stdint.h>: uint8_t, uint16_t, uint32_t, and uint64_t. I'll assume that the Microsoft-specific types are defined similarly. Microsoft does support <stdint.h>, at least as of Visual Studio 2010, but older code may use uint8 et al.
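For example, a legacy header might define them roughly like this (a sketch only; the underlying widths are assumptions that happen to hold on common Windows targets, and your project's actual typedefs may differ):

/* Sketch: one plausible set of legacy definitions -- check your own headers */
typedef unsigned char      uint8;   /* 8 bits,  1 byte  */
typedef unsigned short     uint16;  /* 16 bits, 2 bytes */
typedef unsigned int       uint32;  /* 32 bits, 4 bytes */
typedef unsigned long long uint64;  /* 64 bits, 8 bytes */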
The predefined types char, short, int et al have sizes that vary from one C implementation to another. The C standard has certain minimum requirements (char is at least 8 bits, short and int are at least 16, long is at least 32, and each type in that list is at least as wide as the previous type), but permits some flexibility. For example, I've seen systems where int is 16, 32, or 64 bits.
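If you want to see what your own implementation uses, a minimal sketch like this prints the sizes (sizeof reports bytes, not bits):

#include <stdio.h>

int main(void)
{
    /* sizeof yields size_t; cast to unsigned int so %u works on pre-C99 compilers */
    printf("char:  %u\n", (unsigned int) sizeof(char));
    printf("short: %u\n", (unsigned int) sizeof(short));
    printf("int:   %u\n", (unsigned int) sizeof(int));
    printf("long:  %u\n", (unsigned int) sizeof(long));
    return 0;
}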
char is almost always exactly 8 bits, but it's permitted to be wider. And plain char may be either signed or unsigned.
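You can check both properties on your system with <limits.h>; this is just a small sketch:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a char (at least 8) */
    printf("char has %d bits\n", CHAR_BIT);
    /* CHAR_MIN is 0 exactly when plain char is an unsigned type */
    printf("plain char is %s\n", CHAR_MIN == 0 ? "unsigned" : "signed");
    return 0;
}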
uint8_t is required to be an unsigned integer type that's exactly 8 bits wide. It's likely to be a typedef for unsigned char, though it might be a typedef for plain char if plain char happens to be unsigned. If there is no predefined 8-bit unsigned type, then uint8_t will not be defined at all.
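You can detect that case portably, because <stdint.h> defines the limit macro UINT8_MAX if and only if it defines uint8_t (a small sketch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
#ifdef UINT8_MAX
    /* UINT8_MAX is defined only when uint8_t itself is provided */
    uint8_t x = 200;
    printf("uint8_t exists, value = %u, max = %u\n",
           (unsigned int) x, (unsigned int) UINT8_MAX);
#else
    printf("this implementation has no exact 8-bit unsigned type\n");
#endif
    return 0;
}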
Similarly, each uintN_t type is an unsigned type that's exactly N bits wide.
In addition, <stdint.h> defines corresponding signed intN_t types, as well as int_fastN_t and int_leastN_t types that are at least the specified width.
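As a quick sketch of the three families side by side (only the exact-width types are guaranteed to be exactly 32 bits; the others may be wider on some implementations):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t       a = -5;   /* exactly 32 bits */
    uint32_t      b =  5;   /* exactly 32 bits, unsigned */
    int_least32_t c = -5;   /* the smallest type with at least 32 bits */
    int_fast32_t  d = -5;   /* the "fastest" type with at least 32 bits */

    printf("sizes: %u %u %u %u\n",
           (unsigned int) sizeof a, (unsigned int) sizeof b,
           (unsigned int) sizeof c, (unsigned int) sizeof d);
    return 0;
}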
The [u]intN_t types are guaranteed to have no padding bits, so the size of each is exactly N bits. The signed intN_t types are required to use a 2's-complement representation.
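With a C11-or-later compiler you can express those guarantees as compile-time checks (a sketch; pre-C11 compilers need a different trick):

#include <limits.h>
#include <stdint.h>

/* exactly 32 value bits and no padding bits */
_Static_assert(sizeof(uint32_t) * CHAR_BIT == 32, "uint32_t must be exactly 32 bits");
_Static_assert(UINT32_MAX == 4294967295u, "uint32_t must hold 0..2^32-1");

int main(void) { return 0; }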
Although uint32_t might be the same as unsigned int, for example, you shouldn't assume that. Use unsigned int when you need an unsigned integer type that's at least 16 bits wide, and that's the "natural" size for the current system. Use uint32_t when you need an unsigned integer type that's exactly 32 bits wide.
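For instance (a sketch illustrating the guideline; the struct and field names are made up):

#include <stdint.h>
#include <stdio.h>

/* A field with a fixed file/wire layout: use an exact-width type. */
struct record_header {
    uint32_t payload_length;   /* exactly 32 bits on every platform */
};

int main(void)
{
    struct record_header h = { 1320u };

    /* A plain counter that only needs >= 16 bits: unsigned int is fine. */
    unsigned int i;
    for (i = 0; i < 3; i++) {
        printf("%u: payload length %lu\n", i, (unsigned long) h.payload_length);
    }
    return 0;
}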
(And no, uint64 or uint64_t is not the same as double; double is a floating-point type.)
It's quite unclear how you are computing the size ("the size in debug mode"?).
Use printf():
printf("the size of c is %u\n", (unsigned int) sizeof c);
Normally you'd print a size_t value (which is the type sizeof returns) with %zu, but if you're using a pre-C99 compiler like Visual Studio that won't work.
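With a C99-or-later compiler the cast is unnecessary:

printf("the size of c is %zu\n", sizeof c);  /* %zu matches size_t directly */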
You need to find the typedef statements in your code that define the custom names like uint8 and so on; those are not standard, so nobody here can know how they're defined in your code.
New C code should use <stdint.h>, which gives you uint8_t and so on.
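As a sketch, here is the program from the question rewritten on top of <stdint.h>. Note that a real 8-bit element can only hold 0 through 255, so storing 1320 in it truncates the value:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    double   a = 1320.134;
    uint32_t b;
    uint8_t  c[20];          /* 20 elements of exactly 1 byte each, so sizeof c == 20 */

    b = (uint32_t) a;        /* 1320 */
    c[3] = (uint8_t) b;      /* truncated modulo 256: 1320 % 256 == 40 */

    printf("sizeof c is %u\n", (unsigned int) sizeof c);
    printf("value is %u\n", (unsigned int) c[3]);
    return 0;
}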