Is using a non-32-bit integer reasonable? [duplicate]

Tags: c++, types

Possible Duplicate:
The importance of using a 16bit integer

If today's processors perform (under standard conditions) 32-bit operations, is using a "short int" reasonable? As I understand it, in order to perform an operation on that data, the CPU converts the 16-bit value to a 32-bit integer, performs the operation, and then narrows the result back to 16 bits. So what is the point?

In essence my questions are as follows:

  1. What (if any) performance gain or hindrance does using a smaller-range integer bring? For example, using a 16-bit short integer for storage instead of a standard 32-bit integer.
  2. "and then go back to 16-bit" -- am I correct here? See above, and the sketch after this list.
  3. Is all integer data stored in 32-bit slots in CPU registers and RAM?
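
For what it's worth, the "widen, operate, narrow" behavior described in question 2 is visible at the language level as C++'s integer promotions; a minimal sketch (variable names are illustrative):

    #include <cstdio>
    #include <type_traits>

    int main() {
        short a = 1000, b = 2000;

        // Arithmetic on short operands first promotes them to int
        // (C++ "integer promotion"), so a + b is computed as an int.
        auto sum = a + b;
        static_assert(std::is_same_v<decltype(sum), int>,
                      "short + short yields int");

        // Storing the result back into a short narrows it to 16 bits.
        short narrowed = static_cast<short>(sum);
        std::printf("%d %d\n", sum, narrowed); // prints: 3000 3000
    }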
asked Dec 20 '12 by user1908181

1 Answer

The answer to your first question should also clarify the last one: if you need to store large numbers of 16-bit ints, they take half the memory that 32-bit ints would require, with whatever "fringe benefits" come along with that, such as using the cache more efficiently.
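
A quick way to see the savings (the element count is arbitrary, chosen only for illustration):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        constexpr std::size_t n = 1'000'000;
        std::vector<std::int16_t> buf16(n);
        std::vector<std::int32_t> buf32(n);

        // Half the bytes per element: twice as many values fit in
        // each cache line, and in the cache as a whole.
        std::printf("16-bit buffer: %zu bytes\n", buf16.size() * sizeof(buf16[0]));
        std::printf("32-bit buffer: %zu bytes\n", buf32.size() * sizeof(buf32[0]));
    }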

Most CPUs these days have separate instructions for 16-bit vs. 32-bit operations, along with instructions to read and write 16-bit values from and to memory. Internally, the ALU may be performing a 32-bit operation, but the result for the upper half does not make it back into the registers.
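
Here is a minimal sketch of that truncation as it appears at the C++ level (whether the compiler emits a 16-bit instruction or masks a 32-bit result is up to it; either way only the low 16 bits survive):

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::uint16_t x = 0xFFFF;

        // x + 1 is computed in at least 32 bits as 0x10000, but the
        // cast stores only the low 16 bits, discarding the upper half.
        x = static_cast<std::uint16_t>(x + 1);
        std::printf("%u\n", static_cast<unsigned>(x)); // prints: 0
    }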

answered Nov 04 '22 by Sergey Kalinichenko