Why are both little- and big-endian still in use today, after ~40 years of binary computer science? Are there algorithms or storage formats that work better with one and much worse with the other? Wouldn't it be better if we all switched to one and stuck with it?
Big-endian is an order in which the "big end" (most significant value in the sequence) is stored first, at the lowest storage address. Little-endian is an order in which the "little end" (least significant value in the sequence) is stored first.
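To make the two orderings concrete, here's a minimal C sketch (the constant 0x12345678 and the variable names are just illustrative) that prints a word's bytes as they sit in memory, lowest address first:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint32_t word = 0x12345678;          /* example value only */
    unsigned char bytes[sizeof word];
    memcpy(bytes, &word, sizeof word);   /* copy out the raw in-memory byte layout */

    /* Little-endian host prints: 78 56 34 12   Big-endian host prints: 12 34 56 78 */
    for (size_t i = 0; i < sizeof word; i++)
        printf("%02X ", bytes[i]);
    printf("\n");
    return 0;
}
```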
The endianness convention is simply a choice between two ways of ordering the bytes when transferring a word of data between a register and memory. The first is called big-endian and the second little-endian. Intel x86 processors are little-endian, so most personal computers are little-endian.
Putting the least significant byte first allows the processor to get started on the add after having read only the first byte of an offset.
Going back to the Wikipedia article, the stated advantage of big-endian numbers is that the size of the number can be more easily estimated because the most significant digit comes first.
When adding two numbers (on paper or in a machine), you start with the least significant digits and work towards the most significant digits. (Same goes for many other operations).
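As a sketch of why that ordering is convenient, here's an illustrative C function (the name `add_le` is made up) that adds two multi-byte integers stored little-endian; the carry moves in the same direction as increasing addresses, so a single forward pass over memory is enough:

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Add two n-byte unsigned integers stored little-endian (byte 0 is the
 * least significant). The carry propagates in the same direction as
 * increasing addresses, so one forward pass over the bytes suffices. */
static void add_le(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n) {
    unsigned carry = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned sum = a[i] + b[i] + carry;
        dst[i] = (uint8_t)sum;
        carry = sum >> 8;
    }
}

int main(void) {
    /* 0x000000FF + 0x00000001 = 0x00000100, as little-endian byte arrays */
    uint8_t a[4] = {0xFF, 0x00, 0x00, 0x00};
    uint8_t b[4] = {0x01, 0x00, 0x00, 0x00};
    uint8_t sum[4];
    add_le(sum, a, b, 4);
    printf("%02X %02X %02X %02X\n", sum[0], sum[1], sum[2], sum[3]); /* 00 01 00 00 */
    return 0;
}
```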
On the Intel 8088, which had 16-bit registers but an 8-bit data bus, being little-endian allowed such instructions to start operating after the first memory cycle. (Of course it should be possible for the memory fetches of a word to be done in decreasing order rather than increasing, but I suspect this would have complicated the design a little.)
On most processors the bus width matches the register width so this no longer confers an advantage.
Big-endian numbers, on the other hand, can be compared starting with the MSB (although many compare instructions actually do a subtract which needs to start with the LSB anyway). The sign bit is also very easy to get.
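One place this shows up in practice: if unsigned keys are serialized big-endian, a plain byte-wise comparison such as memcmp orders them numerically, which is why some on-disk key formats store integers big-endian. A small sketch (the helper name `put_be32` is just for illustration):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Serialize a 32-bit unsigned value big-endian (most significant byte first).
 * Keys written this way sort correctly under plain byte-wise comparison. */
static void put_be32(uint8_t out[4], uint32_t v) {
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)v;
}

int main(void) {
    uint8_t a[4], b[4];
    put_be32(a, 70000);   /* 00 01 11 70 */
    put_be32(b, 65535);   /* 00 00 FF FF */
    /* memcmp sees the most significant byte first, so its sign agrees with
     * comparing the numbers themselves (the little-endian layouts would not). */
    printf("%d\n", memcmp(a, b, 4) > 0);   /* prints 1: 70000 > 65535 */
    return 0;
}
```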
Are there algorithms or storage formats that work better with one and much worse with the other?
No. There are small advantages here and there but nothing major.
I actually think little-endian is more natural and consistent: the significance of a bit is 2 ^ (bit_pos + 8 * byte_pos), whereas with big-endian the significance of a bit is 2 ^ (bit_pos + 8 * (word_size - byte_pos - 1)).
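A quick sketch of those two formulas (the function names are invented for illustration); note that only the big-endian weight needs to know the word size:

```c
#include <stdio.h>
#include <stdint.h>

/* Weight of a bit in a little-endian word: depends only on its bit and
 * byte position (counted from the lowest address), never on word size. */
static uint64_t le_weight(unsigned bit_pos, unsigned byte_pos) {
    return 1ULL << (bit_pos + 8 * byte_pos);
}

/* Weight of a bit in a big-endian word: byte 0 is the most significant
 * byte, so the word size has to enter the formula. */
static uint64_t be_weight(unsigned bit_pos, unsigned byte_pos, unsigned word_size) {
    return 1ULL << (bit_pos + 8 * (word_size - byte_pos - 1));
}

int main(void) {
    /* The same in-memory position (bit 3 of the byte at offset 1) weighs
     * 2^11 under little-endian but 2^19 in a 4-byte big-endian word. */
    printf("%llu %llu\n",
           (unsigned long long)le_weight(3, 1),
           (unsigned long long)be_weight(3, 1, 4));
    return 0;
}
```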
Wouldn't it be better if we all switched to one and stuck with it?
Due to the dominance of x86, we've definitely gravitated towards little-endian. The ARM chips in many mobile devices have configurable endianness but are often set to LE to be more compatible with the x86 world. Which is fine by me.