 

Long Vs. Int C/C++ - What's The Point?

As I've learned recently, a long in C/C++ is the same length as an int. To put it simply, why? It seems almost pointless to even include the datatype in the language. Does it have any uses specific to it that an int doesn't have? I know we can declare a 64-bit int like so:

long long x = 0; 

But why does the language choose to do it this way, rather than just making a long well...longer than an int? Other languages such as C# do this, so why not C/C++?

asked Sep 17 '11 by MGZero




2 Answers

When writing in C or C++, the width of every integer datatype is architecture- and compiler-specific. On one system int is 32 bits, but you can find systems where it is 16 or 64; the standard doesn't fix the sizes, so it's up to the compiler.

As for long and int, the distinction comes from the days when the standard integer was 16 bits and long was a 32-bit integer, so it was indeed longer than int.
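
A quick way to see this in practice is to print the widths your own compiler uses. A minimal sketch in C99 (it relies only on sizeof, %zu, and CHAR_BIT, all standard):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* Widths are implementation-defined; this reports what the
           current compiler/platform actually chose. */
        printf("char:      %zu bits\n", sizeof(char)      * CHAR_BIT);
        printf("short:     %zu bits\n", sizeof(short)     * CHAR_BIT);
        printf("int:       %zu bits\n", sizeof(int)       * CHAR_BIT);
        printf("long:      %zu bits\n", sizeof(long)      * CHAR_BIT);
        printf("long long: %zu bits\n", sizeof(long long) * CHAR_BIT);
        return 0;
    }

On a typical 64-bit Linux system (the LP64 model) this prints 8, 16, 32, 64, 64; on 64-bit Windows (LLP64) long stays at 32 bits, so the very same source reports different widths.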

answered Sep 22 '22 by Griwes


The specific guarantees are as follows:

  • char is at least 8 bits (1 byte by definition, however many bits it is)
  • short is at least 16 bits
  • int is at least 16 bits
  • long is at least 32 bits
  • long long (in versions of the language that support it) is at least 64 bits
  • Each type in the above list is at least as wide as the previous type (but may well be the same).

Thus it makes sense to use long if you need a type that's at least 32 bits, and int if you need a type that's reasonably fast and at least 16 bits.
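
A minimal sketch of that rule of thumb (the numbers are purely illustrative): the loop counter is bounded by 1000, so int is safe, but the running total exceeds 32767 and therefore needs long.

    #include <stdio.h>

    int main(void)
    {
        long total = 0;                      /* needs >= 32 bits: long guarantees it */
        for (int i = 0; i < 1000; i++)       /* fits in any conforming int */
            total += 100;
        printf("total = %ld\n", total);      /* 100000: could overflow a 16-bit int */
        return 0;
    }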

Actually, at least in C, these lower bounds are expressed in terms of ranges, not sizes. For example, the language requires that INT_MIN <= -32767 and INT_MAX >= +32767. The 16-bit requirement follows from this and from the requirement that integers are represented in binary.
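
Those ranges are exposed per implementation through <limits.h>, so you can inspect them directly; a small sketch:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* The standard only sets lower bounds on these ranges;
           the macros report what this implementation provides. */
        printf("INT_MIN  = %d\n",  INT_MIN);
        printf("INT_MAX  = %d\n",  INT_MAX);
        printf("LONG_MIN = %ld\n", LONG_MIN);
        printf("LONG_MAX = %ld\n", LONG_MAX);
        return 0;
    }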

C99 adds <stdint.h> and <inttypes.h>, which define types such as uint32_t, int_least32_t, and int_fast16_t; these are typedefs, usually defined as aliases for the predefined types.
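
A brief sketch of those types in use; note that <inttypes.h> also supplies matching printf format macros (PRIu32 and friends), precisely because the underlying predefined type varies between platforms:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        uint32_t      exact = 4000000000u; /* exactly 32 bits, where available */
        int_least32_t least = -123456;     /* smallest type with >= 32 bits */
        int_fast16_t  fast  = 1234;        /* "fastest" type with >= 16 bits */

        printf("exact = %" PRIu32      "\n", exact);
        printf("least = %" PRIdLEAST32 "\n", least);
        printf("fast  = %" PRIdFAST16  "\n", fast);
        return 0;
    }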

(There isn't necessarily a direct relationship between size and range. An implementation could make int 32 bits, but with a range of only, say, -2^23 .. +2^23-1, with the other 8 bits (called padding bits) not contributing to the value. It's theoretically possible (but practically highly unlikely) that int could be larger than long, as long as long has at least as wide a range as int. In practice, few modern systems use padding bits, or even representations other than two's complement, but the standard still permits such oddities. You're more likely to encounter exotic features in embedded systems.)

answered Sep 22 '22 by Keith Thompson