Number 4 represented as a 32-bit unsigned integer would be
on a big endian machine: 00000000 00000000 00000000 00000100 (most significant byte first)
on a little endian machine: 00000100 00000000 00000000 00000000 (most significant byte last)
As an 8-bit unsigned integer it is represented as 00000100 on both machines.
Now, when casting an 8-bit uint to a 32-bit one, I always thought that on a big-endian machine this means sticking 24 zeros in front of the existing byte, and appending 24 zeros after it if the machine is little endian. However, someone pointed out that in both cases zeros are prepended rather than appended. But wouldn't that mean that on a little-endian machine 00000100 becomes the most significant byte, which would result in a very large number? Please explain where I am wrong.
Zeroes are prepended if you consider the mathematical value (which just happens to also be the big-endian representation).
Casts in C always strive to preserve the value, not the representation. That's why, for example, (int)1.25 results in 1 (* see note below), rather than something that makes much less sense.
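A minimal sketch of this (the variable names are just for illustration):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t small = 4;
    uint32_t wide = (uint32_t)small;      /* widening cast preserves the value */

    printf("%u\n", (unsigned)wide);       /* prints 4 on big- and little-endian machines alike */
    printf("%d\n", (int)1.25);            /* prints 1: the value is converted, not the bits */
    return 0;
}
```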
As discussed in the comments, the same holds for bit-shifts (and other bitwise operations, for that matter). 50 >> 1 == 25, regardless of endianness.
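The same idea as a sketch for shifts (nothing here depends on byte order):

```c
#include <stdio.h>

int main(void) {
    /* Shifts act on the mathematical value, so the result is the same everywhere. */
    printf("%d\n", 50 >> 1);  /* prints 25 */
    printf("%d\n", 4 << 8);   /* prints 1024: the value is multiplied by 256, no bytes are "moved" */
    return 0;
}
```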
(* note: usually; it depends on the rounding mode used for the float-to-integer conversion)
In short: Operators in C operate on the mathematical value, regardless of representation. One exception is when you cast a pointer to the object (as in (char*)&foo), since that gives you a different "view" into the same data.
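To actually see the representation, you have to go through a pointer cast like the one above. A minimal sketch (the output in the comments assumes the two common byte orders):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t foo = 4;
    unsigned char *bytes = (unsigned char *)&foo;  /* view of the raw bytes */

    /* Little endian prints: 04 00 00 00
       Big endian prints:    00 00 00 04 */
    for (size_t i = 0; i < sizeof foo; i++)
        printf("%02x ", bytes[i]);
    printf("\n");
    return 0;
}
```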