 

Difference between char and int when declaring character

Tags:

c

I just started learning C and am rather confused over declaring characters using int and char.

I am well aware that characters are represented as integers, in the sense that each character's value is its respective ASCII code.

That said, I learned that it's perfectly possible to declare a character using int without using the ASCII code. E.g., declaring a variable test holding the character 'X' can be written as:

char test = 'X'; 

and

int test = 'X'; 

And for both declarations, the conversion specifier for printing the character is %c (even though test is defined as an int).

Therefore, my question is: what are the differences between declaring character variables using char and int, and when should int be used to declare a character variable?

asked May 15 '16 by xhxh96



1 Answer

The difference is the size in bytes of the variable, and from there the range of values the variable can hold.

A char is required to hold all values between 0 and 127 (inclusive). In common environments it occupies exactly one byte (8 bits). Whether a plain char is signed (-128 to 127) or unsigned (0 to 255) is implementation-defined.

An int is required to be at least a 16-bit signed word and to hold all values between -32767 and 32767. That means an int can hold every value a char can, whether the latter is signed or unsigned.

If you want to store only characters in a variable, you should declare it as char. Using an int would just waste memory and could mislead a future reader. One common exception to that rule is when you need to represent a special out-of-band value alongside ordinary characters. For example, the function fgetc from the standard library is declared as returning int:

int fgetc(FILE *fd); 

because the special value EOF (for End Of File) is defined as a negative int, commonly -1 (all bits set in a two's-complement system), which lies outside the range of values fgetc returns for actual characters (0 to UCHAR_MAX). That way, no character read from the stream can compare equal to the EOF constant. If the function were declared to return a plain char, nothing could distinguish the EOF value from the (valid) character 0xFF.

That's the reason why the following code is bad and should never be used:

char c;    // a terrible memory saving...
...
while ((c = fgetc(stdin)) != EOF) {   // NEVER WRITE THAT!!!
    ...
}

Inside the loop, a char would be enough to hold the character, but if c is a (signed) char, reading the character 0xFF stores the value -1, which compares equal to EOF and ends the loop early. To keep the test reliable, the variable needs to be an int.

answered Oct 01 '22 by Serge Ballesta