Difference between u8, uint8_t, __u8 and __be8

While browsing linux networking code, I came across these datatypes:

  1. u8
  2. uint8_t
  3. __u8
  4. __be8

(same things for 16, 32 and 64 bits)

Can someone please explain the difference between these datatypes and where to use which? I have seen the definitions of these datatypes but those were not clear to me.

asked Nov 12 '14 by Soumen

People also ask

What is __uint8_t?

uint8_t is available on systems where there is a native type with exactly eight bits. If there is no such type, then uint8_t is not defined. This has nothing to do with unix, linux, OS X, or whatever.

What is the difference between Uint8 and uint8_t?

The difference between Uint8 and uint8_t will depend on the implementation, but usually they will both be 8-bit unsigned integers. Also, uint8_t and uint16_t are defined by the C standard (and by C++ via <cstdint>) in the stdint.h header; Uint8 and Uint16 are non-standard as far as I know.

Does u8 mean unsigned 8-bit?

u8 : The 8-bit unsigned integer type. u16 : The 16-bit unsigned integer type. u32 : The 32-bit unsigned integer type.

Is uint8_t the same as char in C++?

If the intended use of the variable is to hold an unsigned numerical value, use uint8_t; If the intended use of the variable is to hold a signed numerical value, use int8_t; If the intended use of the variable is to hold a printable character, use char.
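A minimal sketch of that guideline in C (the variable names are just illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t level   = 200;   /* unsigned numeric value, 0..255  */
        int8_t  offset  = -5;    /* signed numeric value, -128..127 */
        char    initial = 'S';   /* printable character             */

        printf("%u %d %c\n", (unsigned)level, (int)offset, initial);
        return 0;
    }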


2 Answers

uint8_t is Standard C and represents an unsigned 8-bit integral type. If you are on a system that does not have 8-bit addressable units then this will not be defined; otherwise it is probably a typedef for unsigned char.

Anything with __ in it is reserved for implementation use. This means that compiler writers and standard library writers can use such identifiers without worrying about a name clash with user code. You may see them when looking at the internals of a standard library implementation.

u8 is non-standard but almost certainly means the same as uint8_t. A reason that u8 might be used is in code that was written before uint8_t was added to Standard C.
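For reference, here is a simplified sketch of where the names usually come from: uint8_t and friends from <stdint.h>, and __u8/u8 as kernel-style typedefs. The real kernel headers differ per architecture and carry extra guards, so treat the following only as an illustration of the idea:

    /* Standard C: exact-width types come from <stdint.h>. */
    #include <stdint.h>   /* uint8_t, uint16_t, uint32_t, uint64_t */

    /* Illustration of kernel-style definitions; user code should not
     * define double-underscore names itself, since they belong to the
     * implementation (here: the kernel headers). */
    typedef unsigned char      __u8;
    typedef unsigned short     __u16;
    typedef unsigned int       __u32;
    typedef unsigned long long __u64;

    /* Kernel-internal shorthand; the userspace-facing ABI headers use
     * the __-prefixed spellings instead. */
    typedef __u8  u8;
    typedef __u16 u16;
    typedef __u32 u32;
    typedef __u64 u64;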

answered Oct 09 '22 by M.M


The last denotes big-endian, a.k.a. network byte order (not significant for a single byte).

The others are all identical to each other, an unsigned type of 8 bits / 1 byte.

The above holds for the 16-bit / 2-byte and 32-bit / 4-byte variants as well.

(Host byte order on x86, for example, is little-endian, and Linux only runs on machines with 8-bit bytes.)
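For illustration, a minimal userspace sketch of what the __be annotation implies: the value is stored in big-endian (network) order and has to be converted before the CPU can use it as a number. Userspace has the POSIX htons/ntohs/htonl/ntohl helpers for that; the kernel uses its own cpu_to_be32()/be32_to_cpu() family. The port number below is made up:

    #include <arpa/inet.h>   /* htons, ntohs */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t host_port = 8080;              /* host byte order           */
        uint16_t wire_port = htons(host_port);  /* big-endian, "on the wire" */

        printf("host: 0x%04x  network: 0x%04x\n",
               (unsigned)host_port, (unsigned)wire_port);
        printf("round trip: %u\n", (unsigned)ntohs(wire_port));
        return 0;
    }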

Out of the list, only uint8_t is defined by the C standard (in <stdint.h>); the last two are in the implementation's namespace, which is generally a bad idea to borrow for your own code.

answered Oct 09 '22 by Deduplicator