Possible Duplicate:
The importance of using a 16bit integer
If today's processors perform 32-bit operations under standard conditions, is using a "short int" reasonable? In order to operate on 16-bit data, the CPU will (I think) widen it to 32 bits, perform the operation, and then truncate the result back to 16 bits. So what is the point?
In essence, my questions are: is there any real benefit to using a short int, and if so, when should it be preferred over a plain int?
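To illustrate what I mean, here is a minimal C sketch. It relies on C's usual integer promotions (short operands are converted to int before arithmetic); the exact sizes printed depend on the platform:

```c
#include <stdio.h>

int main(void) {
    short a = 1000, b = 2000;

    /* C's integer promotions: both short operands are converted to int,
       so the addition itself is performed at int width. */
    printf("sizeof(short) = %zu\n", sizeof(short));  /* typically 2 */
    printf("sizeof(a + b) = %zu\n", sizeof(a + b));  /* typically 4, i.e. sizeof(int) */
    return 0;
}
```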
The number 2,147,483,647 (hexadecimal 0x7FFFFFFF) is the maximum positive value for a 32-bit signed binary integer.
On today's mainstream platforms, int is 32 bits wide, although the C standard only guarantees it to be at least 16 bits. sizeof(T) gives the number of bytes (8-bit octets on virtually all modern systems) needed to store a variable of type T.
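You can check these limits and sizes on your own platform with a quick sketch (this assumes the typical 16-bit short / 32-bit int arrangement noted above):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("INT_MAX  = %d (0x%X)\n", INT_MAX, INT_MAX); /* 2147483647, 0x7FFFFFFF */
    printf("SHRT_MAX = %d\n", SHRT_MAX);                /* 32767 */
    printf("sizeof(short) = %zu, sizeof(int) = %zu\n",
           sizeof(short), sizeof(int));                 /* typically 2 and 4 */
    return 0;
}
```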
The answer to your first question should also clarify the last one: if you need to store large numbers of 16-bit ints, you save half the memory required for 32-bit ints, with whatever "fringe benefits" may come along with that, such as using the cache more efficiently.
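To make the savings concrete, a sketch (the array name and element count are made up for illustration, and a 2-byte short and 4-byte int are assumed):

```c
#include <stdio.h>

#define N 1000000  /* one million elements, an arbitrary example size */

static short samples16[N];
static int   samples32[N];

int main(void) {
    /* With a 2-byte short and a 4-byte int, the 16-bit array occupies half
       the memory, so twice as many elements fit in each cache line. */
    printf("16-bit array: %zu bytes\n", sizeof(samples16)); /* ~2 MB */
    printf("32-bit array: %zu bytes\n", sizeof(samples32)); /* ~4 MB */
    return 0;
}
```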
Most CPUs these days have separate instructions for 16-bit vs. 32-bit operations, along with instructions to read and write 16-bit values from and to memory. Internally, the ALU may be performing a full 32-bit operation, but the upper half of the result never makes it back into the destination register.
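A short C sketch of that write-back behavior: the addition is done at int width, but storing into a short keeps only the low 16 bits. (The wrapped value shown assumes the common two's-complement representation; conversion of an out-of-range value to a signed type is implementation-defined.)

```c
#include <stdio.h>

int main(void) {
    short a = 30000, b = 30000;

    /* The sum is computed at int width (60000 fits comfortably),
       but assigning it to a short keeps only the low 16 bits. */
    int   wide   = a + b;          /* 60000 */
    short narrow = (short)(a + b); /* commonly -5536 on two's-complement systems */

    printf("as int:   %d\n", wide);
    printf("as short: %d\n", narrow);
    return 0;
}
```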