 

Practical difference between int and char

Tags:

c

I have to analyse the output of these code fragments:

int x, y;
x = 200; y = 100;
x = x+y; y = x-y; x = x-y;
printf ("%d %d\n", x, y);

char x, y;
x = 200; y = 100;
x = x+y; y = x-y; x = x-y;
printf ("%d %d\n", x, y);

So, I know that int stands for integer and char for character, and I've read about the differences: if I use %d in the printf, the value is printed as a number, and with %c it is printed as a character.

The ASCII character code for 'A' is 65, for example, but why does the second fragment print 100 -56 instead of 100 200?

asked Apr 08 '13 by Souza


2 Answers

On the platform used in the question, char is 1 byte (8 bits) and signed, with 1 sign bit and 7 value bits, using two's complement arithmetic. It can therefore hold values from -128 to 127, so storing 200 in it wraps around to 200 - 256 = -56 (the exact result of such an out-of-range conversion is implementation-defined). This is what happens to x and y step by step:

x = 200  => x takes value -56   (200 - 256)
y = 100  => y takes value 100
x = x+y  => x takes value 44    (-56 + 100)
y = x-y  => y takes value -56   (44 - 100)
x = x-y  => x takes value 100   (44 - (-56))
answered Sep 21 '22 by ssantos


C has a variety of integer types: char (at least 8 bits), short (at least 16 bits), int (at least 16 bits), long (at least 32 bits), each with an unsigned variety. Only those minimum sizes are guaranteed, not the exact ones; there have been machines on which all of them were 32 bits wide. When a value doesn't fit, unsigned types wrap around modulo 2^N by definition. For signed types, converting an out-of-range value (as in `x = 200` here) gives an implementation-defined result, and signed overflow in arithmetic is undefined behavior, so the compiler may assume it never happens and generate code accordingly. You should not rely on either.

answered Sep 18 '22 by vonbrand