This question was asked to me in an interview: the size of char is 2 bytes on some operating systems, but 4 bytes (or something else) on others. Why is that so? Why is it different from other fundamental types, such as int?
The size of most data types depends on the compiler and the target architecture, i.e. whether it is a 16-bit, 32-bit, or 64-bit compiler. For example, int is 2 bytes under a 16-bit compiler but typically 4 bytes under 32-bit and 64-bit compilers.
The range of values an integer type can store is determined by how many bytes are allocated for it. Because a bit can hold one of two values, 0 or 1, a type with n bits can represent 2^n distinct values.
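As a quick illustration, here is a minimal C++ sketch (the exact values printed are implementation-defined and depend on your compiler and target):

    #include <climits>
    #include <cstdio>

    int main() {
        // sizeof yields the size in bytes; the representable range
        // follows from 2^n, where n is the number of bits.
        std::printf("sizeof(int) = %zu bytes (%zu bits)\n",
                    sizeof(int), sizeof(int) * CHAR_BIT);
        std::printf("INT_MIN = %d, INT_MAX = %d\n", INT_MIN, INT_MAX);
        return 0;
    }

On a typical 32-bit or 64-bit desktop compiler this prints 4 bytes (32 bits), giving a range of -2^31 to 2^31 - 1.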
By contrast, the JVM (Java Virtual Machine) is designed to be platform-independent: if data type sizes differed across platforms, cross-platform consistency would be sacrificed. The JVM therefore fixes the sizes of Java's primitive types and isolates the program from the underlying OS and hardware.
Larger types may need multiple CPU registers: for example, storing a number greater than 2^32 on a 32-bit machine requires two registers, and the same applies to a number wider than 64 bits on a 64-bit machine.
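If a program needs a guaranteed width regardless of what int happens to be, the fixed-width types from <cstdint> are the portable route. A minimal sketch (on a 32-bit target the compiler typically implements the 64-bit arithmetic with a pair of registers, exactly as described above):

    #include <cinttypes>
    #include <cstdint>
    #include <cstdio>

    int main() {
        // uint64_t is exactly 64 bits on every implementation that
        // provides it; a 32-bit CPU handles it with two registers.
        std::uint64_t big = UINT64_C(5000000000);  // larger than 2^32
        std::printf("big = %" PRIu64 "\n", big);
        return 0;
    }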
That was probably a trick question. sizeof(char) is always 1.
If the size differs, it's probably because of a non-conforming compiler, in which case the question should be about the compiler itself, not about the C or C++ language.
Quoting the C++ standard ([expr.sizeof], paragraph 1):

The sizeof operator yields the number of bytes in the object representation of its operand. The operand is either an expression, which is not evaluated, or a parenthesized type-id. The sizeof operator shall not be applied to an expression that has function or incomplete type, or to an enumeration type before all its enumerators have been declared, or to the parenthesized name of such types, or to an lvalue that designates a bit-field. sizeof(char), sizeof(signed char) and sizeof(unsigned char) are 1. The result of sizeof applied to any other fundamental type (3.9.1) is implementation-defined. (emphasis mine)
The sizes of the other fundamental types are implementation-defined, and they vary for various reasons: an int has a better range if it's represented in 64 bits instead of 32, but it's more efficient as 32 bits on a 32-bit architecture.
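The guarantee for char can even be checked at compile time. A minimal sketch; note that a "byte" in the standard's sense is CHAR_BIT bits, which is at least 8 but not necessarily exactly 8:

    #include <climits>
    #include <cstdio>

    // Guaranteed by the standard on every conforming implementation.
    static_assert(sizeof(char) == 1, "sizeof(char) is 1 by definition");

    int main() {
        // CHAR_BIT is 8 on mainstream platforms, but some DSPs use
        // 16 or 32, and sizeof(char) is still 1 there.
        std::printf("CHAR_BIT = %d\n", CHAR_BIT);
        std::printf("sizeof(int) = %zu\n", sizeof(int));  // implementation-defined
        return 0;
    }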