I'm reading "Learning Core Audio: A Hands-On Guide to Audio Programming for Mac and iOS" by Chris Adamson and at one point the author describes big-endian as:
the high bits of a byte or word are numerically more significant than the lower ones.
However, until now I thought the problem of big/little endian only applied to byte order and not bit order. One byte has the same bit order (left to right) no matter whether we're talking about little-endian or big-endian systems. Am I wrong? Is the author wrong? Or did I misunderstand his point?
In computing, endianness is the order or sequence of bytes of a word of digital data in computer memory.
Bit order usually follows the same endianness as the byte order for a given computer system. That is, in a big endian system the most significant bit is stored at the lowest bit address; in a little endian system, the least significant bit is stored at the lowest bit address.
Bits within a byte are commonly numbered with Bit0 as the least significant bit and Bit7 as the most significant bit. Thus, the bit numbering of a 32-bit integer follows a left-to-right order on a big-endian system and a right-to-left order on a little-endian system.
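To make the distinction concrete, here is a minimal C sketch (my own illustration, not part of the quoted text) that copies a 32-bit value into a byte array. The byte order you observe in memory depends on the machine, while shift-and-mask arithmetic on the value gives the same result everywhere:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint32_t value = 0x0A0B0C0D;
    unsigned char bytes[4];

    /* Copy the integer into a byte array to observe its in-memory layout. */
    memcpy(bytes, &value, sizeof value);

    /* Prints 0A 0B 0C 0D on a big-endian machine, 0D 0C 0B 0A on a little-endian one. */
    printf("bytes in memory: %02X %02X %02X %02X\n",
           (unsigned)bytes[0], (unsigned)bytes[1],
           (unsigned)bytes[2], (unsigned)bytes[3]);

    /* Bit-level operations act on the value, not on its memory layout,
       so this prints 0D on either kind of machine. */
    printf("low byte via mask: %02X\n", (unsigned)(value & 0xFF));

    return 0;
}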
The TCP/IP standard network byte order is big-endian. In order to participate in a TCP/IP network, little-endian systems usually bear the burden of conversion to network byte order.
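As a rough sketch of what that conversion looks like in practice, the POSIX functions htonl() and ntohl() (declared in <arpa/inet.h>) convert a 32-bit value between host and network byte order; on a big-endian host they are effectively no-ops:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl() / ntohl() on POSIX systems */

int main(void)
{
    uint32_t host_value = 0x12345678;

    /* Convert to big-endian network byte order before sending.
       On a little-endian host the bytes are swapped. */
    uint32_t network_value = htonl(host_value);

    /* Convert back to host byte order after receiving. */
    uint32_t round_trip = ntohl(network_value);

    printf("host: %08X  network: %08X  round trip: %08X\n",
           (unsigned)host_value, (unsigned)network_value, (unsigned)round_trip);
    return 0;
}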
Since you can't normally address the bits within a byte individually, there's no concept of "bit endianness" generally.
The only sense in which there is such a thing as "bit order" is the order in which bits are assigned to bitfields. For instance, in:
union {
    struct { unsigned char a:4; unsigned char b:4; } bf;
    unsigned char c;
};
depending on the implementation, the representation of bf.a could occupy the high four bits of c, or the low four bits of c. Whether the ordering of bitfield members matches the byte order is implementation-defined.
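Here is a small, self-contained program built around the same union (given the hypothetical tag overlay only so the type can be declared and compiled) that shows how to check which half bf.a actually occupies on a given implementation:

#include <stdio.h>

/* Same layout as the union above; the tag name is added here only so
   the type can be declared. Which nibble bf.a lands in is
   implementation-defined. */
union overlay {
    struct {
        unsigned char a:4;
        unsigned char b:4;
    } bf;
    unsigned char c;
};

int main(void)
{
    union overlay u;

    u.c = 0xA5;   /* high nibble 0xA, low nibble 0x5 */

    /* On many common ABIs (e.g. x86-64 System V) this prints a=5 b=A,
       meaning the first-declared bitfield occupies the low four bits;
       other implementations may print a=A b=5 instead. */
    printf("a=%X b=%X\n", (unsigned)u.bf.a, (unsigned)u.bf.b);

    return 0;
}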