I do realize the title may sound silly at first, but please, bear with me for a moment. :)
Ever since I started using size_t and ptrdiff_t, I haven't had a use for int, as far back as I can remember. The only integer data types I remember using recently fall into one of these categories:

- (Unsigned) integers associated with an index into some in-memory data structure (e.g. vector). Almost always, the most appropriate type for this is size_t (or ...::size_type, if you're going the extra mile). Even if the integer doesn't actually represent an index, oftentimes it's still associated with some index, so size_t is still appropriate.
- Signed versions of size_t. In many cases, the most suitable type for this seems to be ptrdiff_t, because oftentimes when you need one you're working with iterators, and hence size_t and ptrdiff_t are both appropriate for them.
- long, which I occasionally need for _InterlockedIncrement (reference counting).
- (unsigned) long long, used for holding file sizes.
- unsigned int or unsigned long, useful for "counting" purposes (e.g. every 1 million iterations, update the UI).
- unsigned char, for raw byte-level access to memory. (Side note: I've never found a use for signed char either.)
- intptr_t and uintptr_t, for occasionally storing operating system handles, pointers, etc.
One particularly important aspect of int is that you shouldn't overflow it (signed overflow is undefined behavior), so you can't even use it reliably for counting -- especially if your compiler defines it to be 16 bits.
So when, then, should you use int (aside from when a dependency of yours already requires it)?
Is there any real use for it nowadays, at least in newly written, portable code?
How about the most important reason of all - readability (and simple math)?

long weeklyHours = daysWorked * hoursPerDay;

"Okay... but then again, how much can a human actually work per week that we need a long?"

size_t weeklyHours = daysWorked * hoursPerDay;

"Wait... are we using weeklyHours to iterate over a vector?"

unsigned int weeklyHours = daysWorked * hoursPerDay;

"Clear enough." - though it's a possible source of errors if either operand can be negative (which could be part of the logic - say, a way to account for time off or leave; the details aren't important).

int weeklyHours = daysWorked * hoursPerDay;

"Okay, simple enough. I get what this is doing."
Luchian has some excellent readability points, to which I'll add some technical ones:
- On most platforms it's int that's efficient to deal with, whereas long might not be (risking more CPU cycles per operation, more bytes of machine code, more registers needed etc.).
- abs(a - b) looks right mathematically but doesn't give the intuitive result when b > a and the operands are unsigned.
- Signed arithmetic lets intermediate results go negative and still come out right, as in int second_delta = (x.seconds - y.seconds) + (x.minutes - y.minutes) * 60;
- Likewise, if (pending - completed > 1) kick_off_threads(); misfires with unsigned operands when completed exceeds pending.
- As a sentinel value, -1 is often used: for unsigned types this will be converted to the largest possible value, but that can lead to misunderstandings and coding errors (e.g. an if (x >= 0) test for "not the sentinel" is always true when x is unsigned).

Also, there's a lot of scope for implicit conversions between signed and unsigned integers - it's important to understand that unsigned types rarely help enforce a "non-negative" invariant: if that's part of the appeal, you're better off writing a class with a constructor and operators enforcing the invariant.
On the readability side, int denotes a general need for a number that clearly spans the problem domain - it may be excessive, but it's known to be cheap in machine-code operations and CPU cycles, so it's the go-to type for general integral storage. If you start using, say, unsigned char to store someone's age, not only does it not play well with operator<<(std::ostream&, ...), but it invites questions like "was there some need to conserve memory here?" (especially confusing for stack-based variables), "is there some intention to treat this as binary data for I/O or IPC purposes?", or even "is it a known single-digit age stored in ASCII?". If something's likely to end up in a register anyway, int is a natural sizing.