Omitting the datatype (e.g. "unsigned" instead of "unsigned int")

Tags:

c++

c

types

I know that if the data type is omitted in a C/C++ declaration like this: unsigned test = 5;, the compiler automatically makes the variable an int (an unsigned int in this case). I've heard that this is part of the C standard and that it works in all compilers.

But I've also heard that doing this is considered a bad practice.

What do you think? Should I really type unsigned int instead of just unsigned?

Are short, long and long long also datatypes?
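For reference, here is a quick sketch of the kind of declarations being asked about (variable names are made up); each pair names exactly the same type, because the specifiers imply int when the int keyword is left out:

    unsigned test = 5;        // same as: unsigned int test = 5;
    short s = -1;             // same as: short int s = -1;
    long l = 100000L;         // same as: long int l = 100000L;
    long long ll = 1LL << 40; // same as: long long int ll = 1LL << 40;
    unsigned long ul = 42u;   // same as: unsigned long int ul = 42u;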

asked Nov 30 '10 by rhino


People also ask

What is the unsigned data type?

An unsigned data type simply means that the data type will only hold non-negative values (zero and positive); negative values cannot be stored in it. Unsigned data types include int, char, short, and long.

What is unsigned data type in C++?

The unsigned keyword is a data type specifier that makes a variable represent only non-negative integer numbers (positive numbers and zero). It can be applied only to the char, short, int and long types.

Is unsigned int better to use?

Unsigned integers are used when we know that the value that we are storing will always be non-negative (zero or positive). Note: it is almost always the case that you could use a regular integer variable in place of an unsigned integer.

How are unsigned short and unsigned int related?

For unsigned (int and short), the range must be at least 0 to 65535, so both types must be at least 16 bits wide. Also, the standard mandates that the range of (unsigned) short is contained in the range of (unsigned) int, and the range of (unsigned) char must be contained in the range of (unsigned) short.
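As an illustration of those guarantees, here is a minimal sketch that prints the actual ranges on whatever platform it is compiled for; the exact maxima vary, but each type's range must contain the range of the previous one:

    #include <iostream>
    #include <limits>

    int main() {
        // Unary + promotes unsigned char to int so it prints as a number.
        std::cout << "unsigned char : 0.."
                  << +std::numeric_limits<unsigned char>::max() << '\n';
        std::cout << "unsigned short: 0.."
                  << std::numeric_limits<unsigned short>::max() << '\n';
        std::cout << "unsigned int  : 0.."
                  << std::numeric_limits<unsigned int>::max() << '\n';
        std::cout << "unsigned long : 0.."
                  << std::numeric_limits<unsigned long>::max() << '\n';
    }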


2 Answers

unsigned is a data type! And it happens to alias to unsigned int.

When you’re writing unsigned x; you are not omitting any data type.

This is completely different from “default int” which exists in C (but not in C++!) where you really omit the type on a declaration and C automatically infers that type to be int.
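A small sketch of that distinction (the assertion message and the commented declaration are only for illustration):

    #include <type_traits>

    // "unsigned" and "unsigned int" name exactly the same type,
    // so this assertion always holds.
    static_assert(std::is_same<unsigned, unsigned int>::value,
                  "unsigned is just another spelling of unsigned int");

    // By contrast, C's old "default int" really does drop the type:
    //   static x = 5;  // implicitly int in C89 (removed in C99); rejected by C++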

As for style, I personally prefer to be explicit and thus to write unsigned int. On the other hand, I’m currently involved in a library where it’s convention to just write unsigned, so I do that instead.

answered Sep 23 '22 by Konrad Rudolph


I would even take it one step further and use stdint's uint32_t type.
It might be a matter of taste, but I prefer to know what primitive I'm using over some ancient consideration of optimising per platform.
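For illustration, a minimal sketch using the fixed-width types from <cstdint> (variable names are made up; these typedefs are technically optional but available on all common platforms):

    #include <cstdint>

    std::uint32_t counter = 0;  // exactly 32 bits, unsigned
    std::int64_t  offset = -1;  // exactly 64 bits, signed

    // "unsigned int" only guarantees at least 16 bits, so it is not
    // interchangeable with uint32_t when the width actually matters.
    unsigned int at_least_16_bits = 0;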

answered Sep 21 '22 by stefaanv