
Difference between signed / unsigned char [duplicate]

Tags:

c

types

unsigned

People also ask

What is the difference between a signed and unsigned char?

An unsigned type can only represent positive values (and zero), whereas a signed type can represent both positive and negative values (and zero). In the case of an 8-bit char this means that an unsigned char variable can hold a value in the range 0 to 255, while a signed char has the range -128 to 127.
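A quick way to see these ranges on your own platform is to print the macros from <limits.h>; a minimal sketch (the comments assume the common 8-bit char):

#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("signed char:   %d to %d\n", SCHAR_MIN, SCHAR_MAX);  /* typically -128 to 127 */
    printf("unsigned char: 0 to %u\n", (unsigned)UCHAR_MAX);    /* typically 0 to 255 */
    printf("plain char:    %d to %d\n", CHAR_MIN, CHAR_MAX);    /* matches one of the above */
    return 0;
}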

What is the difference between char and signed char?

By the C++ standard, a signed char is only guaranteed to be able to hold values -127 to 127 (not -128!), whereas an unsigned char is able to hold values 0 to 255. On common two's-complement platforms the actual signed range is -128 to 127.

Why do we need signed and unsigned char in C?

Signed char and unsigned char are both used to store a single character. The variable stores the character's integer code (its ASCII value on ASCII platforms). For example, if 'A' is stored, the variable actually holds 65. For signed char we need not write the signed keyword.
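For instance (assuming an ASCII execution character set), printing the same char with %c and %d shows both the character and its integer code:

#include <stdio.h>

int main(void) {
    char c = 'A';
    printf("%c has code %d\n", c, c);  /* prints: A has code 65 */
    return 0;
}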

Is char signed or unsigned on ARM?

On ARM, plain char is traditionally unsigned by default (the ARM ABI specifies it as unsigned). Unsigned char must be used for accessing memory as a block of bytes or for small unsigned integers. Signed char must be used for small signed integers, and plain char must be used only for ASCII characters and strings.
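Whether plain char defaults to signed or unsigned on a given target can be checked portably via CHAR_MIN, which is 0 exactly when plain char is unsigned; a minimal sketch:

#include <stdio.h>
#include <limits.h>

int main(void) {
#if CHAR_MIN == 0
    puts("plain char is unsigned on this target");
#else
    puts("plain char is signed on this target");
#endif
    return 0;
}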


There's no dedicated "character type" in the C language. char is an integer type, the same (in that regard) as int, short and other integer types. char just happens to be the smallest integer type. So, just like any other integer type, it can be signed or unsigned.

It is true that (as the name suggests) char is mostly intended to be used to represent characters. But characters in C are represented by their integer "codes", so there's nothing unusual in the fact that an integer type char is used to serve that purpose.

The only general difference between char and other integer types is that plain char is a distinct type, not synonymous with signed char (whether it behaves as signed or unsigned is implementation-defined), while with other integer types the signed modifier is optional/implied (int and signed int are the same type).
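The fact that plain char is a third, distinct type can be demonstrated with C11's _Generic, which dispatches on the static type; a small sketch (listing all three labels would be a constraint violation if any two of them named the same type):

#include <stdio.h>

#define TYPE_NAME(x) _Generic((x),        \
    char:          "char",                \
    signed char:   "signed char",         \
    unsigned char: "unsigned char",       \
    default:       "something else")

int main(void) {
    char c = 'A';
    signed char sc = -1;
    unsigned char uc = 255;
    puts(TYPE_NAME(c));   /* char */
    puts(TYPE_NAME(sc));  /* signed char */
    puts(TYPE_NAME(uc));  /* unsigned char */
    return 0;
}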


I slightly disagree with the above. unsigned char simply means: use the most significant bit as part of the value instead of treating it as a +/- sign flag when performing arithmetic operations.

It makes a difference if you use char as a number, for instance:

#include <stdio.h>

typedef char BYTE1;
typedef unsigned char BYTE2;

int main(void) {
    BYTE1 a = -1;   /* stored as 0xFF; reads back as -1 when char is signed */
    BYTE2 b = 255;  /* stored as 0xFF; always reads back as 255 */
    printf("a = %d, b = %d\n", a, b);
    return 0;
}

For variable a, only 7 bits carry the magnitude, and on a typical two's-complement machine its range is -128 to 127 (the standard only guarantees -127 to 127). For variable b all 8 bits carry the value, and the range is 0 to 255 (2^8 - 1).

If you use char only to hold plain ASCII characters (codes 0 to 127), the signedness makes no practical difference, since those values are representable either way; it starts to matter as soon as byte values above 127 appear.
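A sketch of where it does matter, assuming the common case of a signed, two's-complement char (0xE9 is 'é' in Latin-1, chosen here just as an example of a byte above 127):

#include <stdio.h>

int main(void) {
    char c = (char)0xE9;     /* implementation-defined; typically -23 */
    unsigned char u = 0xE9;  /* always 233 */
    printf("c as int: %d\n", c);  /* -23 if char is signed */
    printf("u as int: %d\n", u);  /* 233 */
    return 0;
}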


There are three char types: (plain) char, signed char and unsigned char. Any char is usually an 8-bit integer* and in that sense signed char and unsigned char have a useful meaning (generally equivalent to int8_t and uint8_t respectively). When used as a character in the sense of text, use a char (also referred to as a plain char). This is typically signed, but it can be implemented either way by the compiler.

* Technically, a char can be any width (CHAR_BIT bits, at least 8) as long as sizeof(char) is 1, but it is usually an 8-bit integer.
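A sketch for checking both the width and the typedef equivalences on a given platform (int8_t/uint8_t are optional in the standard, but on mainstream platforms they are exactly signed char and unsigned char):

#include <stdio.h>
#include <limits.h>
#include <stdint.h>

int main(void) {
    printf("sizeof(char) = %zu, CHAR_BIT = %d\n", sizeof(char), CHAR_BIT);
    int8_t  s = -128;   /* typically a signed char underneath */
    uint8_t u = 255;    /* typically an unsigned char underneath */
    printf("int8_t: %d, uint8_t: %u\n", s, (unsigned)u);
    return 0;
}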


The representation is the same; the meaning is different. E.g., the bit pattern 0xFF is represented as "FF" either way. Treated as a signed char it is the negative number -1, but as unsigned char it is 255. Bit shifting is where the difference shows: shifting 255 right by 1 bit gives 127, whereas shifting -1 right typically has no effect, because most compilers use an arithmetic shift that replicates the sign bit (the result of right-shifting a negative value is implementation-defined).
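A short sketch of that difference (note again that -1 >> 1 yielding -1 assumes the common arithmetic shift):

#include <stdio.h>

int main(void) {
    unsigned char u = 0xFF;  /* 255 */
    signed char   s = -1;    /* same 0xFF bit pattern on two's complement */
    /* Both operands promote to int before the shift. */
    printf("255 >> 1 = %d\n", u >> 1);  /* 127: zeros shift in */
    printf("-1  >> 1 = %d\n", s >> 1);  /* commonly -1: sign bit replicated */
    return 0;
}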