Apart from using byte[] in streaming I don't really see byte and short used much. On the other hand I have seen long used where the actual value never exceeds 100 in magnitude and byte would be more appropriate. Is this a consequence of the relatively inexpensive nature of memory now, or is this just minutiae that developers needn't worry about?
The int datatype is the default choice for numeric values. The long datatype is used less often; it should only be used when a value can exceed the range of int. It requires the most memory (8 bytes) of the four integer types.
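A minimal sketch of that range argument (the year-in-milliseconds figure is just an illustration): reach for long only once a value can exceed Integer.MAX_VALUE.

```java
public class RangeDemo {
    public static void main(String[] args) {
        // Roughly 31.5 billion milliseconds in a year -- well beyond Integer.MAX_VALUE (2,147,483,647).
        long millisPerYear = 365L * 24 * 60 * 60 * 1000;  // computed in long arithmetic
        int overflowed     = 365  * 24 * 60 * 60 * 1000;  // same expression in int arithmetic wraps around

        System.out.println(millisPerYear);  // 31536000000
        System.out.println(overflowed);     // 1471228928 (silently wrapped)
    }
}
```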
Looking at the benchmark results in @meriton's answer, it appears that using short and byte instead of int incurs a performance penalty for multiplication. Indeed, if you consider the operations in isolation, the penalty is significant.
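One plausible reason (my sketch, not taken from that benchmark): the JVM has no dedicated 8-bit or 16-bit arithmetic instructions, so byte and short operands are promoted to int and the result has to be narrowed back explicitly, adding an instruction per operation.

```java
public class PromotionDemo {
    public static void main(String[] args) {
        short a = 1000;
        short b = 30;

        // short c = a * b;        // does not compile: a * b is evaluated as int
        short c = (short) (a * b); // the narrowing cast compiles to an extra i2s instruction

        int d = a * b;             // with int there is no narrowing step
        System.out.println(c + " " + d);
    }
}
```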
short: The short data type is a 16-bit signed two's complement integer. It has a minimum value of -32,768 and a maximum value of 32,767 (inclusive). As with byte, the same guidelines apply: you can use a short to save memory in large arrays, in situations where the memory savings actually matter.
Using short can conserve memory because it is narrower than int, which can be important when using a large array: in Java, int is always 32 bits and short is always 16 bits, so a short[] holds the same number of elements in roughly half the memory of an int[].
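A minimal sketch of that array case (sizes are approximate and ignore array headers):

```java
public class ArrayMemory {
    public static void main(String[] args) {
        final int n = 10_000_000;

        short[] compact = new short[n];  // ~2 bytes per element -> roughly 20 MB
        int[]   wide    = new int[n];    // ~4 bytes per element -> roughly 40 MB

        System.out.println(compact.length + " shorts, " + wide.length + " ints");
    }
}
```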
They are used when programming for embedded devices that are short on memory or disk space, such as appliances and other electronic devices.
byte is also used in low-level web programming, where you send requests to web servers as raw headers and payloads.
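For example, a hand-rolled HTTP request ultimately goes over the socket as bytes; a rough sketch, with example.com and the header text as placeholders:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RawHttp {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) {   // placeholder host
            String request = "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
            byte[] requestBytes = request.getBytes(StandardCharsets.US_ASCII);

            OutputStream out = socket.getOutputStream();
            out.write(requestBytes);                             // headers travel as raw bytes
            out.flush();

            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[4096];
            int read = in.read(buffer);                          // first chunk of the response
            if (read > 0) {
                System.out.println(new String(buffer, 0, read, StandardCharsets.US_ASCII));
            }
        }
    }
}
```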
The byte datatype is frequently used when dealing with raw data from a file or network connection, though it is mostly used as byte[]. The short and short[] types are often used in connection with GUIs and image processing (for pixel locations and image sizes), and in sound processing.
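A typical sketch of that pattern (the file name is a placeholder): the data has no inherent type, so it is read into a byte[] buffer.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class RawBytes {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[8192];                          // raw, untyped data
        try (InputStream in = new FileInputStream("data.bin")) { // placeholder file name
            int read;
            while ((read = in.read(buffer)) != -1) {
                // process buffer[0 .. read-1] here
                System.out.println("read " + read + " bytes");
            }
        }
    }
}
```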
The primary reason for using byte or short is one of clarity. The program code states categorically that only 8 or 16 bits are to be used, and when you accidentally use a larger type (without an appropriate typecast) you get a compilation error. (Admittedly, this could also be viewed as a nuisance when writing the code ... but once again, the presence of the typecasts flags to the reader that truncation is happening.)
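A small sketch of that clarity argument: the widened result has to be cast back explicitly, so the truncation is visible to the reader.

```java
public class TruncationDemo {
    public static void main(String[] args) {
        byte flags = 0x1F;
        int mask = 0x0F;

        // flags = flags & mask;        // compile error: flags & mask is an int
        flags = (byte) (flags & mask);  // the cast makes the truncation to 8 bits explicit

        System.out.println(flags);      // 15
    }
}
```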
You don't achieve any space saving by using byte or short in simple variables instead of int, because most Java implementations align stack variables and object members on word boundaries. However, primitive array types are handled differently; i.e. elements of boolean, byte, char and short arrays are byte aligned. But unless the arrays are large in size or large in number, they don't make any significant contribution to the app's overall memory usage.
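A rough way to observe this (results vary by JVM and the measurement is only approximate; ByteHolder is a made-up class): a packed byte[] costs about a byte per element, while the same values stored as fields of individual objects cost far more.

```java
public class AlignmentDemo {
    static class ByteHolder { byte value; }   // the single byte field is padded inside the object

    static long usedHeap() {
        System.gc();                          // best-effort hint only
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        final int n = 1_000_000;

        long before = usedHeap();
        byte[] packed = new byte[n];          // elements are byte-aligned: ~1 byte each
        long afterArray = usedHeap();

        ByteHolder[] holders = new ByteHolder[n];
        for (int i = 0; i < n; i++) {
            holders[i] = new ByteHolder();    // roughly 16 bytes each (object header + padding)
        }
        long afterObjects = usedHeap();

        System.out.println("byte[] cost:  ~" + (afterArray - before) + " bytes");
        System.out.println("object cost:  ~" + (afterObjects - afterArray) + " bytes");

        // keep both alive so they aren't collected before measurement
        System.out.println(packed.length + holders.length);
    }
}
```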
So I guess that the main reason developers don't use byte or short as much as you (a C developer?) might expect is that it really doesn't make much (or often any) difference. Java developers tend not to obsess over memory usage like old-school C developers did :-).