For one of my C++ programming assignments, I have to consider the size of an integer variable while designing the program. I read around the internet, and most places say "The size of an integer is dependent on the platform." I'm unclear about what this means, so I'm asking here.
What determines the size of a primitive data type?
What is the reason to choose an integer to have a size of 2 bytes in some systems, and 4 bytes in others? Is there any reason it cannot remain 2 bytes?
The size or range of the data that can be stored in an integer data type is determined by how many bytes are allocated for storage. Because a bit can hold 2 values, 0 or 1, you can calculate the number of possible values as 2^n, where n is the number of bits.
A primitive type is predefined by the language and is named by a reserved keyword. Primitive values do not share state with other primitive values. For comparison, the eight primitive data types supported by the Java programming language are byte, short, int, long, float, double, boolean, and char; the byte data type, for example, is an 8-bit signed two's complement integer.
What's the "platform" they mean here?
Usually, it means the combination of operating system, compiler, and certain compiler options.
What decides the primitive data type sizes?
That would be a combination of the above.
By the way, the term for this is the 'data model' (e.g. ILP32, LP64, LLP64); you can learn more about it at http://en.wikipedia.org/wiki/64-bit
What determines the size of a primitive data type?
It depends on the compiler. The compiler, in turn, usually depends on the architecture, processor, development environment, etc., because it takes them into account. So you may say it's a combination of all of these.
What is the reason to choose an integer to have a size of 2 bytes in some systems, and 4 bytes in others? Is there any reason it cannot proceed with 2 byte anymore?
The C++ standard does not specify the size of integral types in bytes, but it specifies minimum ranges they must be able to hold. You can infer the minimum size in bits from the required range, and the minimum size in bytes from that together with the value of the CHAR_BIT macro, which defines the number of bits in a byte (on all but the most obscure platforms it is 8, and it cannot be less than 8).