In C, the integer (on a 32-bit machine) is 32 bits, and it ranges from -32,768 to +32,767. In Java, the integer (long) is also 32 bits, but it ranges from -2,147,483,648 to +2,147,483,647.
I do not understand how the range is different in Java, even though the number of bits is the same. Can someone explain this?
To find the maximum value of the unsigned short int data type, we take 2 to the power of 16 and subtract 1, which gives 65,535. We get the number 16 by taking the number of bytes assigned to the unsigned short int type (2) and multiplying it by the number of bits in each byte (8).
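As a quick check of that arithmetic, here is a minimal sketch in Java (Java is used purely for illustration; the class and variable names are made up for this example):

```java
public class UnsignedShortRange {
    public static void main(String[] args) {
        int bytes = 2;               // storage assigned to an unsigned short int
        int bits = bytes * 8;        // 2 bytes * 8 bits per byte = 16 bits
        long max = (1L << bits) - 1; // 2^16 - 1
        System.out.println(max);     // prints 65535
    }
}
```

The same formula gives the other unsigned maximums shown later: 2^n - 1 for an n-bit unsigned type.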
The int type in Java can be used to represent any whole number from -2147483648 to 2147483647. Why those numbers?
A 32-bit signed integer. It has a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647 (inclusive).
A signed integer is a 32-bit datum that encodes an integer in the range [-2147483648, 2147483647]. An unsigned integer is a 32-bit datum that encodes a nonnegative integer in the range [0, 4294967295]. The signed integer is represented in two's complement notation.
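To make the two's-complement relationship concrete, here is a small illustrative Java sketch (not part of the answer above): the same 32-bit pattern of all ones reads as -1 when interpreted as signed and as 4,294,967,295 when interpreted as unsigned.

```java
public class TwosComplementDemo {
    public static void main(String[] args) {
        System.out.println(Integer.MIN_VALUE);               // -2147483648
        System.out.println(Integer.MAX_VALUE);               //  2147483647

        int allOnes = 0xFFFFFFFF;                            // all 32 bits set
        System.out.println(allOnes);                         // -1 (signed view)
        System.out.println(Integer.toUnsignedLong(allOnes)); // 4294967295 (unsigned view)

        // In two's complement, MIN_VALUE is a 1 followed by 31 zeros.
        System.out.println(Integer.toBinaryString(Integer.MIN_VALUE));
    }
}
```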
In C, the language itself does not determine the exact representation of the integer data types. It can vary from machine to machine: on embedded systems an int can be 16 bits wide, though usually it is 32 bits.

The only requirement is that short int <= int <= long int by size. There is also a recommendation that int should match the native word size of the processor.

All of these types are signed by default. The unsigned modifier lets the highest bit be used as part of the value (otherwise it is reserved for the sign bit).
Here's a short table of the value ranges for each width:

          width   minimum                      maximum
signed    8 bit   -128                         +127
signed   16 bit   -32 768                      +32 767
signed   32 bit   -2 147 483 648               +2 147 483 647
signed   64 bit   -9 223 372 036 854 775 808   +9 223 372 036 854 775 807
unsigned  8 bit   0                            +255
unsigned 16 bit   0                            +65 535
unsigned 32 bit   0                            +4 294 967 295
unsigned 64 bit   0                            +18 446 744 073 709 551 615
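All of these rows follow from one formula: an n-bit signed two's-complement type spans -(2^(n-1)) through 2^(n-1) - 1, and an n-bit unsigned type spans 0 through 2^n - 1. Here is a hedged Java sketch that regenerates the table from that formula (BigInteger is used only because the 64-bit unsigned maximum does not fit in a long; the class name is made up):

```java
import java.math.BigInteger;

public class RangeTable {
    public static void main(String[] args) {
        BigInteger two = BigInteger.valueOf(2);
        for (int bits : new int[] {8, 16, 32, 64}) {
            BigInteger signedMin = two.pow(bits - 1).negate();                  // -(2^(n-1))
            BigInteger signedMax = two.pow(bits - 1).subtract(BigInteger.ONE);  // 2^(n-1) - 1
            BigInteger unsignedMax = two.pow(bits).subtract(BigInteger.ONE);    // 2^n - 1
            System.out.printf("%2d bit  signed:   %s .. %s%n", bits, signedMin, signedMax);
            System.out.printf("%2d bit  unsigned: 0 .. %s%n", bits, unsignedMax);
        }
    }
}
```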
In Java, the Java Language Specification determines the representation of the data types.

The order is: byte is 8 bits, short is 16 bits, int is 32 bits, long is 64 bits. All of these types are signed; there are no unsigned versions. However, bit manipulations treat the numbers as if they were unsigned (that is, they handle all bits, including the sign bit).

The character data type char is 16 bits wide, unsigned, and holds characters using the UTF-16 encoding (however, it is possible to assign a char an arbitrary unsigned 16-bit integer that represents an invalid character code point).
        width   minimum                      maximum
SIGNED
byte:    8 bit  -128                         +127
short:  16 bit  -32 768                      +32 767
int:    32 bit  -2 147 483 648               +2 147 483 647
long:   64 bit  -9 223 372 036 854 775 808   +9 223 372 036 854 775 807
UNSIGNED
char:   16 bit  0                            +65 535
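As an illustrative sketch (not part of the original answer), these Java ranges can be read back from constants on the wrapper classes, and the unsigned treatment of bits can be seen with the >>> operator:

```java
public class JavaTypeRanges {
    public static void main(String[] args) {
        System.out.println(Byte.MIN_VALUE + " .. " + Byte.MAX_VALUE);       // -128 .. 127
        System.out.println(Short.MIN_VALUE + " .. " + Short.MAX_VALUE);     // -32768 .. 32767
        System.out.println(Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE); // -2147483648 .. 2147483647
        System.out.println(Long.MIN_VALUE + " .. " + Long.MAX_VALUE);

        // char is an unsigned 16-bit type: 0 .. 65535
        System.out.println((int) Character.MIN_VALUE + " .. " + (int) Character.MAX_VALUE);

        // Bit operations work on all 32 bits; >>> shifts in zeros instead of copying the sign bit.
        System.out.println(-1 >>> 28); // 15: the top 28 bits of the all-ones pattern shifted away
    }
}
```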