Difference between uint and unsigned int?

Tags: c, gcc, uint

Is there any difference between uint and unsigned int?

I've been looking on this site, but all the questions refer to C# or C++. I'd like an answer about the C language.

If it is relevant, note that I'm using GCC under Linux.

asked Apr 15 '11 by the_candyman

6 Answers

uint isn't a standard type - unsigned int is.

answered Sep 29 '22 by Erik

Some systems may define uint as a typedef.

typedef unsigned int uint;

For these systems they are the same. But uint is not a standard type, so not every system supports it, and code that relies on it is not portable.
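For example, with such a typedef in place the two names are fully interchangeable. A minimal sketch (here the typedef is written out explicitly rather than taken from a system header):

#include <stdio.h>

typedef unsigned int uint;   /* the typedef some systems provide */

int main(void) {
    uint a = 42u;            /* exactly the same type as unsigned int */
    unsigned int b = a;      /* no conversion involved */
    printf("%u %u\n", a, b);
    return 0;
}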

answered Sep 29 '22 by taskinoor


Extending the answers by Erik, Teoman Soygul and taskinoor a bit:

uint is not a standard.

Hence using your own shorthand like this is discouraged:

typedef unsigned int uint;

If you are after platform specificity instead (e.g. you need to specify the number of bits your integers occupy), then including stdint.h:

#include <stdint.h>

will expose the following standard categories of integers:

  • Integer types having certain exact widths

  • Integer types having at least certain specified widths

  • Fastest integer types having at least certain specified widths

  • Integer types wide enough to hold pointers to objects

  • Integer types having greatest width

For instance,

Exact-width integer types

The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.

The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.

In practice, <stdint.h> defines, among others:

int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
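
As a small illustration of how these exact-width types are typically used (a sketch assuming a C99-or-later compiler; the print format macros come from <inttypes.h>):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint8_t  flags = 0xFFu;          /* exactly 8 bits, unsigned */
    uint32_t count = 4000000000u;    /* exactly 32 bits, unsigned */
    int16_t  delta = -1234;          /* exactly 16 bits, signed */

    /* PRIu8, PRIu32 and PRId16 expand to the matching printf conversion specifiers */
    printf("%" PRIu8 " %" PRIu32 " %" PRId16 "\n", flags, count, delta);
    return 0;
}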
answered Sep 29 '22 by Yauhen Yakimovich


All of the answers here fail to mention the real reason for uint.
It's obviously a typedef of unsigned int, but that doesn't explain its usefulness.

The real question is,

Why would someone want to typedef a fundamental type to an abbreviated version?

To save on typing?
No, they did it out of necessity.

Consider the C language; a language that does not have templates.
How would you go about stamping out your own vector that can hold any type?

You could do something with void pointers,
but a closer emulation of templates would have you resorting to macros.

So you would define your template vector:

#define define_vector(type)       \
  typedef struct vector_##type {  \
    /* ...vector members... */    \
  } vector_##type;

Declare your types:

define_vector(int)
define_vector(float)
define_vector(unsigned int)

And upon generation, realize that the types ought to be a single token:

typedef struct vector_int { /* ...vector members... */ } vector_int;
typedef struct vector_float { /* ...vector members... */ } vector_float;
typedef struct vector_unsigned int { /* ...vector members... */ } vector_unsigned int;   /* error: "unsigned int" is two tokens */
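
This is where a single-token alias such as uint comes in: with a one-token typedef the macro can be instantiated for unsigned types as well. A minimal sketch (define_vector and its members are illustrative only, not a real library):

#include <stddef.h>

typedef unsigned int uint;          /* single-token alias */

#define define_vector(type)         \
  typedef struct vector_##type {    \
    type  *data;                    \
    size_t size;                    \
  } vector_##type;

define_vector(int)                  /* ok: vector_int */
define_vector(uint)                 /* ok: vector_uint */
/* define_vector(unsigned int) */   /* would not compile: the type name is two tokens */

int main(void) {
    vector_uint v = { 0, 0 };       /* a vector of unsigned ints */
    (void)v;
    return 0;
}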
answered Sep 29 '22 by Trevor Hickey


unsigned int is a built-in (standard) type, so if you want your project to be cross-platform, always use unsigned int: it is guaranteed to be supported by every compiler precisely because it is standard.

answered Sep 29 '22 by Teoman Soygul


uint is a possible and reasonable abbreviation for unsigned int, and it is more readable. But it is not standard C. You can define and use it (like any other define) on your own responsibility. Unfortunately, some system headers define uint too. I have found the following in the sys/types.h of a current compiler (ARM):

 # ifndef   _POSIX_SOURCE
  //....
 typedef    unsigned short  ushort;     /* System V compatibility */
 typedef    unsigned int    uint;       /* System V compatibility */
 typedef    unsigned long   ulong;      /* System V compatibility */
 # endif    /*!_POSIX_SOURCE */

It seems to be a concession to existing sources written for the Unix System V standard. To switch off this undesired behaviour (because I want to

#define uint unsigned int

myself), I first set

#define _POSIX_SOURCE

A system header should not define things that are not standard, but unfortunately many such things are defined there.
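Put together, the approach described above looks roughly like this (a sketch only; whether _POSIX_SOURCE actually suppresses those typedefs depends on the toolchain's sys/types.h, and defining it also hides other non-POSIX declarations):

#define _POSIX_SOURCE               /* must appear before any system include */
#include <sys/types.h>              /* now without the System V compatibility typedefs */
#include <stdio.h>

#define uint unsigned int           /* my own single-token alias */

int main(void) {
    uint x = 7u;
    printf("%u\n", x);
    return 0;
}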

See also my web page https://www.vishia.org/emc/html/Base/int_pack_endian.html#truean-uint-problem-admissibleness-of-system-definitions and https://www.vishia.org/emc.

answered Sep 29 '22 by Hartmut Schorrig