In most situations, one would declare a `char` object to hold one of the character values of the ASCII table, ranging from 0 to 127. Even the extended character sets range from 128 to 255 (still positive). So I'm assuming that when dealing with the printing of characters, one only needs an `unsigned char`.

Now, based on some research on SO, people use a `signed char` when they need to store really small integers, but for that we can utilize the `[u]int8_t` types. So I'm having trouble coming to terms with why one would need a `signed char`: you can use it for the basic ASCII character table (which `unsigned char` is already capable of handling), or you can use it to represent small integers (which `[u]int8_t` already takes care of).

Can someone please provide a programming example in which a `signed char` is preferred over the other types?
Both `signed char` and `unsigned char` can store a single character; the variable holds the character's code, so if `'A'` is stored it will actually hold 65 (in ASCII). Each is a character-sized integer value that is typically smaller than, and is guaranteed not to be bigger than, a `short`. Being a signed type, `signed char` can store negative as well as positive values; on typical 8-bit, two's-complement platforms it ranges from -128 to +127, whereas `unsigned char` ranges from 0 to 255. Note that `signed char` is not the same type as plain `char`: plain `char` is a distinct type whose signedness is implementation-defined.
The reason is that you don't know, at least portably, whether plain `char` variables are signed or unsigned. Different implementations take different approaches: a plain `char` may be signed on one platform and unsigned on another.

If you want to store negative values in a variable of character type, you absolutely must declare it as `signed char`, because only then can you be sure that every platform will be able to store negative values in it. Yes, you can use the `[u]int8_t` types, but this was not always the case (they were only introduced to C++ in C++11), and in fact `int8_t` is most likely an alias for `signed char`.
Moreover, `uint8_t` and `int8_t` are defined as optional types, meaning you can't always rely on their existence (contrary to `signed char`, which every implementation must provide). In particular, if a machine has a byte unit with more than 8 bits, it is not very likely that `uint8_t` and `int8_t` are defined (although they can be; a compiler is always free to provide them and do the appropriate calculations). See this related question: What is int8_t if a machine has > 8 bits per byte?