
data type ranges differing with operating systems

Tags:

c

8-bit, 16-bit, 32-bit, and 64-bit operating systems have different ranges for integer, float, and double values.

Is it the compiler or the processor that makes the difference (8-bit, 16-bit, 32-bit, 64-bit)?

If a 16-bit integer is transferred over a network from one system to a 32-bit system, or vice versa, will the data be represented correctly in memory? Please help me understand.

Asked Sep 01 '11 by Angus


5 Answers

Ultimately, it is up to the compiler. The compiler is free to choose any data types it likes*, even if it has to emulate their behaviour with software routines. Of course, typically, for efficiency it will try to replicate the native types of the underlying hardware.

As to your second question, yes, of course, if you transfer the raw representation from one architecture to another, it may be interpreted incorrectly (endianness is another issue). That is why functions like ntohs() are used.

* Well, not literally anything it likes. The C standard places some constraints, such as that an int must be at least as large as a short.
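
For illustration, here is a minimal sketch (my own, assuming a POSIX system that provides <arpa/inet.h>) of how htons()/ntohs() move a 16-bit value to and from network byte order:

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>   /* htons() / ntohs() on POSIX systems */

    int main(void)
    {
        uint16_t port = 8080;          /* value in the sender's host byte order */
        uint16_t wire = htons(port);   /* converted to network (big-endian) order before sending */
        uint16_t back = ntohs(wire);   /* the receiver converts back to its own host order */

        printf("host: %u  wire: 0x%04x  decoded: %u\n",
               (unsigned)port, (unsigned)wire, (unsigned)back);
        return 0;
    }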

Answered by Oliver Charlesworth


The compiler (more properly, the "implementation") is free to choose the sizes, subject to the limits in the C standard. The set of sizes offered for the various types depends in part on the hardware the program runs on; i.e., the compiler makes the choice, but (unlike in languages such as Java, where data types are explicitly independent of the underlying hardware) it is strongly influenced by what the hardware offers.
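
A quick way to see what sizes a given implementation actually chose is simply to print them; a small sketch assuming a C99 hosted environment:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* Sizes are implementation-defined; the standard only guarantees minimum ranges. */
        printf("short: %zu, int: %zu, long: %zu, long long: %zu bytes\n",
               sizeof(short), sizeof(int), sizeof(long), sizeof(long long));
        printf("INT_MAX on this implementation: %d\n", INT_MAX);
        return 0;
    }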

Answered by Abimaran Kugathasan


It depends not just on the compiler and operating system; it is also dictated by the architecture (the processor, at least).

When passing data between possibly different architectures, code uses fixed-size data types such as uint64_t and uint32_t instead of int, short, etc.

But the size of integers is not the only concern when communicating between computers with different architectures; there is a byte-order issue too (look up big-endian vs. little-endian).
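
As a rough sketch of how both concerns are typically handled (the helper names put_u32_be/get_u32_be are just illustrative), fixed-width types plus explicit byte shifts give an encoding that does not depend on either machine's native byte order:

    #include <stdint.h>

    /* Store a uint32_t into a byte buffer in big-endian (network) order,
       regardless of the sender's native endianness. */
    void put_u32_be(uint8_t *buf, uint32_t value)
    {
        buf[0] = (uint8_t)(value >> 24);
        buf[1] = (uint8_t)(value >> 16);
        buf[2] = (uint8_t)(value >> 8);
        buf[3] = (uint8_t)(value);
    }

    /* Read it back on the receiving side, whatever that machine's endianness is. */
    uint32_t get_u32_be(const uint8_t *buf)
    {
        return ((uint32_t)buf[0] << 24) |
               ((uint32_t)buf[1] << 16) |
               ((uint32_t)buf[2] << 8)  |
                (uint32_t)buf[3];
    }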

Answered by unkulunkulu


The size of a given type depends on the CPU and on the conventions of the operating system.

If you want an integer of a specific size, use the stdint.h header (see the Wikipedia article on stdint.h). It defines int8_t, int16_t, int32_t, int64_t, a few others, and their unsigned equivalents.

For communications between different computers, the protocol should define the sizes and byte order to use.
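
A minimal example of these exact-width types (assuming a C99 compiler; the PRId32/PRIu64 format macros come from <inttypes.h>):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t counter = 100000;            /* exactly 32 bits wherever int32_t is provided */
        uint64_t big = UINT64_C(1) << 40;    /* a value that needs more than 32 bits */

        /* PRId32 / PRIu64 expand to the right printf conversion for each type. */
        printf("counter = %" PRId32 ", big = %" PRIu64 "\n", counter, big);
        return 0;
    }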

Answered by Arnaud Le Blanc


On a network, the protocol has to define which data sizes are used. For endianness, big-endian (network byte order) values are the common recommendation.

If it weren't for the issue with APIs, a compiler would be free to size its short, int, and long however it wants. But often, API calls are tied to these types: e.g. the open() function returns an int, whose size has to be correct.

But the types may also be fixed as part of the ABI definition.
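
If code does rely on such assumptions about the types, one way to make them explicit is a compile-time check; a small sketch assuming a C11 compiler:

    #include <assert.h>
    #include <limits.h>

    /* Fails the build if the assumption about int's width does not hold on this ABI. */
    static_assert(sizeof(int) * CHAR_BIT >= 32,
                  "this code assumes int is at least 32 bits wide");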

Answered by glglgl