I've recently decided to make the leap and read through a bunch of computer science books to better equip myself for the future.
At the moment I'm reading about converting signed to unsigned integers. I understand the majority of it (hopefully it becomes easier eventually), but I'm struggling with the following (in 32-bit):
`-2147483647-1U < -2147483647`
According to the book, this evaluates to true. I'm still struggling with this, as I can't see why it evaluates that way.
From my understanding, both numbers are converted to unsigned values in this calculation because one operand is unsigned. Is the first number therefore -2147483648 after the subtraction and then converted to unsigned, or does that unsigned conversion happen prior to the subtraction?
Sorry for the lengthy post; I'm just trying to get my head around this.
Thanks!
> The first number is therefore -2147483648 after subtraction
Not quite. With `-2147483647-1U`, the conversion to unsigned happens first. With mixed `int`/`unsigned` math, the `int` is converted to `unsigned`. Subtracting an `unsigned` from an `int` results in an `unsigned`, and an `unsigned` is never negative.
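A quick way to see this rule in action (a minimal sketch, assuming 32-bit `int`/`unsigned`):

```c
#include <stdio.h>

int main(void) {
    int      i = -1;
    unsigned u = 1U;

    /* With 32-bit unsigned, i converts to 4294967295 before the
       subtraction, so the result wraps rather than going negative. */
    printf("i - u = %u\n", i - u);             /* 4294967294 */
    printf("(i - u) < i : %d\n", (i - u) < i); /* 1: i converts here too */
    return 0;
}
```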
> `-2147483647-1U < -2147483647`
Assume 32-bit or wider `unsigned`/`int`. `-2147483647-1U` is an `int` minus an `unsigned`, so `-2147483647` is converted to unsigned 2147483649, and the difference is unsigned 2147483648. Now an unsigned is compared to an `int`, so the `int` is converted to unsigned 2147483649. The left is less than the right, so the result is true.
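Each intermediate value can be checked directly (again assuming 32-bit `int`/`unsigned`):

```c
#include <stdio.h>

int main(void) {
    /* Each step of the walkthrough above, printed directly. */
    printf("%u\n", (unsigned)-2147483647);          /* 2147483649 */
    printf("%u\n", -2147483647 - 1U);               /* 2147483648 */
    printf("%d\n", -2147483647 - 1U < -2147483647); /* 1 (true)   */
    return 0;
}
```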
[Edit]
Assume narrower than 32-bit `unsigned`/`int`, yet `long` uses the common 2's complement encoding; this is often seen in embedded 8/16-bit processors in 2017. `-2147483647-1U` is a `long` minus a narrower `unsigned` (the constant 2147483647 does not fit in an `int`, so it has type `long`). `-2147483647` remains a `long`, `1U` is converted to long 1, and the difference is long -2147483648. Now a `long` is compared to a `long`. The left is less than the right, so the result is true.
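This case is harder to try on a typical desktop, but the signed arithmetic it describes can be modeled with explicit `long` operands (a sketch assuming at least 32-bit 2's complement `long`; on the narrow-`int` platform both operands end up as `long` anyway):

```c
#include <stdio.h>

int main(void) {
    /* Model of the narrow-int platform: both sides of the comparison
       have type long, so the arithmetic and the compare stay signed. */
    long lhs = -2147483647L - 1L; /* long - long: -2147483648 */
    long rhs = -2147483647L;

    printf("lhs = %ld, rhs = %ld\n", lhs, rhs);
    printf("lhs < rhs : %d\n", lhs < rhs); /* 1 (true), signed compare */
    return 0;
}
```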