For the last day I have had a nasty bug in my code which, after some searching, appears to be related to comparing char values with hex constants. My compiler is gcc 4.4.1 running on Windows. I replicated the problem in the simple code below:
char c1 = 0xFF; char c2 = 0xFE;
if(c1 == 0xFF && c2 == 0xFE)
{
//do something
}
Surprisingly, the code above does not enter the if block. I have absolutely no idea why and would really appreciate some help on this. It is so absurd that the solution must be (as always) a huge mistake on my part that I totally overlooked.
If I replace the above with unsigned chars it works, but only in some cases. I am struggling to find out what's going on. In addition, if I cast the hex values to char in the comparison, it enters the block correctly, like so:
if(c1 == (char)0xFF && c2 == (char)0xFE)
{
//do something
}
What does that mean? Why is it happening? Isn't the raw hex value interpreted as a char by default? For the curious, the point in my code where I first noticed it is a comparison of the first 2 bytes of a stream with the above hex values and their reverse, to identify the Byte Order Mark.
Any help is appreciated.
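To give a bit more context, here is a simplified sketch of that check; the file name and the two single-byte fread calls are placeholders for my actual stream handling:
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("input.txt", "rb"); //placeholder file name
    if (!f)
        return 1;

    char b1 = 0, b2 = 0;
    fread(&b1, 1, 1, f); //read the first two bytes of the stream
    fread(&b2, 1, 1, f);

    //never taken when the bytes are 0xFF 0xFE and char is signed
    if ((b1 == 0xFF && b2 == 0xFE) || (b1 == 0xFE && b2 == 0xFF))
    {
        //found a UTF-16 Byte Order Mark
    }

    fclose(f);
    return 0;
}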
Plain char can be signed or unsigned. If the type is unsigned, then everything works as you'd expect.
If the type is signed, then assigning 0xFF to c1 means the stored value is -1. When the comparison is executed, c1 is promoted to int, giving -1, while 0xFF is an ordinary positive integer constant (255), so the comparison -1 == 0xFF fails.
Note that the types char, signed char and unsigned char are three distinct types, but two of them have the same representation (and one of those two is char).
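Here is a minimal sketch that makes this visible, assuming a platform where plain char is signed (as it typically is with gcc on x86):
#include <stdio.h>

int main(void)
{
    char c1 = 0xFF; //stores -1 if plain char is signed
    char c2 = 0xFE; //stores -2 if plain char is signed

    //c1 is promoted to the int -1, which is not equal to 255
    printf("c1 == 0xFF               : %d\n", c1 == 0xFF);

    //casting the constant to char makes both sides compare as -1
    printf("c1 == (char)0xFF         : %d\n", c1 == (char)0xFF);

    //unsigned char keeps the byte values positive (255 and 254)
    unsigned char u1 = 0xFF, u2 = 0xFE;
    printf("u1 == 0xFF && u2 == 0xFE : %d\n", u1 == 0xFF && u2 == 0xFE);

    return 0;
}
gcc also has the -fsigned-char and -funsigned-char options to fix the signedness of plain char, which is one reason the same code can behave differently from one platform or compiler to another.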