 

When to use `short` over `int`?

Tags: c++, int, short

There are many questions that ask about the difference between the short and int integer types in C++, but practically, when do you choose short over int?

dayuloli asked Jun 23 '14




2 Answers

(See Eric Finn's answer for a more detailed explanation.)

Notes:

  • Generally, int is set to the 'natural size': the integer form that the hardware handles most efficiently.
  • When a short is used in arithmetic operations (including when read from an array), it is first promoted to int, so using short can actually introduce a speed penalty when processing short integers.
  • Using short can conserve memory if it is narrower than int, which can be important when using a large array.
  • Your program will use more memory on a system with a 32-bit int than on one with a 16-bit int.

Conclusion:

  • Use int unless conserving memory is critical, or your program uses a lot of memory (e.g. many large arrays). In that case, use short.
dayuloli answered Oct 10 '22


You choose short over int when:

Either

  • You want to decrease the memory footprint of the values you're storing (for instance, if you're targeting a low-memory platform),
  • You want to increase performance by packing more values into a single memory page (reducing page faults when accessing your values) and/or into the memory caches (reducing cache misses), and profiling has revealed that there are real gains to be had here,
  • Or you are sending data over a network or storing it to disk, and want to decrease your footprint (to take up less disk space or network bandwidth). Although for these cases, you should prefer types which specify exactly the size in bits rather than int or short, which can vary based on platform (as you want a platform with a 32-bit short to be able to read a file written on a platform with a 16-bit short). Good candidates are the types defined in stdint.h.

And:

  • You have a numeric value which does not need to take on any values that can't be stored in a short on your target platform (for a 16-bit short, this is -32768 to 32767, or 0 to 65535 for a 16-bit unsigned short).
  • Your target platform (or one of your target platforms) uses less memory for a short than for an int. The standard only guarantees that short is not larger than int, so implementations are allowed to use the same size for both.

Note:

chars can also be used as arithmetic types. An answer to "When should I use char instead of short or int?" would read very similarly to this one, but with different numbers (-128 to 127 for an 8-bit signed char, 0 to 255 for an 8-bit unsigned char).

In reality, you likely don't actually want to use the short type specifically. If you want an integer of specific size, there are types defined in <cstdint> that should be preferred, as, for example, an int16_t will be 16 bits on every system, whereas you cannot guarantee the size of a short will be the same across all targets your code will be compiled for.

Eric Finn answered Oct 10 '22