8-bit, 16-bit, 32-bit, and 64-bit systems can have different ranges for integer, float, and double values.
Is it the compiler or the processor that makes the difference (8-bit, 16-bit, 32-bit, 64-bit)?
If a 16-bit integer is transferred over a network from one system to a 32-bit system, or vice versa, will the data be represented correctly in memory? Please help me understand.
Ultimately, it is up to the compiler. The compiler is free to choose any data types it likes*, even if it has to emulate their behaviour with software routines. Of course, typically, for efficiency it will try to replicate the native types of the underlying hardware.
As to your second question: yes, if you transfer the raw representation from one architecture to another, it may be interpreted incorrectly (endianness is another issue). That is why functions like ntohs() are used.
* Well, not literally anything it likes. The C standard places some constraints, such as that an int must be at least as large as a short.
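For illustration, here is a minimal sketch of that round trip, assuming a POSIX system that provides htons()/ntohs() in <arpa/inet.h>:

/* A minimal sketch (assumes POSIX <arpa/inet.h>): convert a 16-bit value
   to network byte order before sending, and back to host byte order on
   the receiving side. */
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
    uint16_t host_value = 0x1234;

    /* htons(): host byte order -> network byte order (big endian) */
    uint16_t wire_value = htons(host_value);

    /* ...wire_value would be written to the socket here... */

    /* ntohs(): network byte order -> host byte order on the receiver */
    uint16_t received = ntohs(wire_value);

    printf("host 0x%04x, received 0x%04x\n",
           (unsigned)host_value, (unsigned)received);
    return 0;
}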
The compiler (more properly the "implementation") is free to choose the sizes, subject to the limits in the C standard. The set of sizes offered by C for its various types depends in part on the hardware it runs on; i.e. the compiler makes the choice, but (except in cases like Java, where data types are explicitly independent of the underlying hardware) it is strongly influenced by what the hardware offers.
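A quick way to see what a particular implementation chose is to print the sizes and ranges it provides; the output of this small sketch will differ between compilers and platforms:

/* A minimal sketch: print the sizes the implementation actually chose.
   The numbers differ between compilers/ABIs (e.g. long is commonly
   4 bytes on 64-bit Windows but 8 bytes on 64-bit Linux). */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("short: %zu bytes, range %d..%d\n",
           sizeof(short), SHRT_MIN, SHRT_MAX);
    printf("int:   %zu bytes, range %d..%d\n",
           sizeof(int), INT_MIN, INT_MAX);
    printf("long:  %zu bytes, range %ld..%ld\n",
           sizeof(long), LONG_MIN, LONG_MAX);
    return 0;
}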
It depends not just on the compiler and operating system; it is also dictated by the architecture (the processor, at least).
When passing data between possibly different architectures, special fixed-size data types are used, e.g. uint64_t and uint32_t instead of int, short, etc.
But the size of integers is not the only concern when communicating between computers with different architectures; there is a byte-order issue too (look up big-endian and little-endian).
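One way to address both concerns is to define the wire format explicitly. The sketch below (my own illustration, not part of the answer) packs a uint32_t into a byte buffer in big-endian order and unpacks it again, so the value round-trips regardless of the native byte order on either side:

/* A minimal sketch: serialize a uint32_t in a defined (big-endian)
   byte order, independent of the sender's and receiver's native order. */
#include <stdint.h>
#include <stdio.h>

static void pack_be32(uint8_t buf[4], uint32_t value)
{
    buf[0] = (uint8_t)(value >> 24);
    buf[1] = (uint8_t)(value >> 16);
    buf[2] = (uint8_t)(value >> 8);
    buf[3] = (uint8_t)(value);
}

static uint32_t unpack_be32(const uint8_t buf[4])
{
    return ((uint32_t)buf[0] << 24) |
           ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8)  |
            (uint32_t)buf[3];
}

int main(void)
{
    uint8_t wire[4];
    pack_be32(wire, 0xDEADBEEF);
    printf("round trip: 0x%08X\n", (unsigned)unpack_be32(wire));
    return 0;
}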
The size of a given type depends on the CPU and on the conventions of the operating system.
If you want an integer of a specific size, use the stdint.h header. It defines int8_t, int16_t, int32_t, int64_t, some others, and their unsigned equivalents.
For communications between different computers, the protocol should define the sizes and byte order to use.
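For example, the exact-width types have the same size and range on every implementation that provides them, which is what makes them suitable for protocol fields. A small sketch using <stdint.h> and the matching printf macros from <inttypes.h>:

/* A minimal sketch: exact-width types have guaranteed sizes and ranges,
   unlike int, short, and long. */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int16_t  small  = INT16_MAX;   /* always -32768..32767 */
    int32_t  medium = INT32_MAX;   /* always -2147483648..2147483647 */
    uint64_t big    = UINT64_MAX;  /* always 0..18446744073709551615 */

    printf("int16_t max:  %" PRId16 "\n", small);
    printf("int32_t max:  %" PRId32 "\n", medium);
    printf("uint64_t max: %" PRIu64 "\n", big);
    return 0;
}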
For network communication, the protocol has to define which data sizes are used. For byte order, using big-endian values is highly recommended.
If it weren't for the APIs, a compiler would be free to set its short, int, and long sizes as it wants. But often, API calls are tied to these types; e.g. the open() function returns an int, whose size has to be correct.
But the types might as well be part of the ABI definition.
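If your code does depend on a particular size (because of an API or ABI), one option, assuming a C11 compiler, is to let the compiler verify the assumption instead of silently miscompiling on another platform:

/* A minimal sketch, assuming C11: fail the build if the assumed
   size of int does not hold on the target. */
#include <limits.h>

_Static_assert(sizeof(int) * CHAR_BIT >= 32,
               "this code assumes int is at least 32 bits");

int main(void)
{
    return 0;
}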