Why did some processor manufacturers decide to use little-endian byte ordering?
I've heard that with big-endian one can determine faster whether a number is negative or positive, because the sign bit comes first. (This doesn't matter on modern CPUs, since individual bits can't be accessed directly anymore.)
Big-endian is an order in which the "big end" (most significant value in the sequence) is stored first, at the lowest storage address. Little-endian is an order in which the "little end" (least significant value in the sequence) is stored first.
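To make that definition concrete, here is a minimal C sketch (the value 0x0A0B0C0D is just an illustrative example) that prints the bytes of a 32-bit integer in the order they actually sit in memory:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint32_t value = 0x0A0B0C0D;   /* "big end" is 0x0A, "little end" is 0x0D */
    uint8_t bytes[4];
    memcpy(bytes, &value, sizeof value);   /* copy out the in-memory byte order */

    for (size_t i = 0; i < sizeof bytes; i++)
        printf("%02X ", (unsigned)bytes[i]);
    printf("\n");
    return 0;
}
```

On a little-endian machine this prints `0D 0C 0B 0A` (the "little end" sits at the lowest address); on a big-endian machine it prints `0A 0B 0C 0D`.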
By far the most common ordering of multiple bytes in one number is little-endian, which is used on all Intel x86 processors.
So knowledge of endianness is important when you are reading and writing data across a network from one system to another. If the sender and the receiver have different endianness, the receiver will reassemble the bytes in the wrong order and end up with a different value than the sender transmitted.
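The conventional fix is to agree on a byte order for the wire: TCP/IP defines network byte order as big-endian, and POSIX provides htonl/ntohl to convert between host and network order. A minimal sketch (the value 0xDEADBEEF is only an example):

```c
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl/ntohl (POSIX); on Windows use winsock2.h */

int main(void) {
    uint32_t host_value = 0xDEADBEEF;

    /* Sender: convert host order -> network order (big-endian)
       before writing the bytes to the socket. */
    uint32_t wire_value = htonl(host_value);

    /* Receiver: convert network order -> host order after reading. */
    uint32_t decoded = ntohl(wire_value);

    printf("round trip ok: %s\n", decoded == host_value ? "yes" : "no");
    return 0;
}
```

On a big-endian host both calls are no-ops; on a little-endian host they swap the bytes, so the code is portable either way.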
Broadly speaking, the endianness in use is determined by the CPU. Because there are a number of options, it is unsurprising that different semiconductor vendors have chosen different endianness for their CPUs.
The benefit of little-endianness is that a variable can be read at any width using the same address. For example, a 32-bit variable can be read as an 8-bit or 16-bit variable without changing the address (see the sketch below). This may be of limited benefit these days, but in the days of assembler and limited memory it could be a significant advantage.
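As a rough illustration, here is a C sketch in which a 32-bit variable holding a value that fits in one byte is re-read at 16-bit and 8-bit widths from the same address (memcpy is used instead of pointer casts only to stay within standard C's aliasing rules; old assembler code would simply load fewer bytes from the same address):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint32_t value = 42;   /* small enough to fit in 8 bits */

    /* On a little-endian machine the least significant byte is at the
       lowest address, so reading 1 or 2 bytes from &value still yields 42. */
    uint8_t  as8;
    uint16_t as16;
    memcpy(&as8,  &value, sizeof as8);
    memcpy(&as16, &value, sizeof as16);

    printf("32-bit: %u, 16-bit: %u, 8-bit: %u\n",
           value, (unsigned)as16, (unsigned)as8);
    return 0;
}
```

On a little-endian machine all three reads print 42; on a big-endian machine the narrower reads would pick up the most significant (zero) bytes and print 0 instead.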
There is no particular benefit to big- or little-endian as such, beyond using the CPU's native endianness or honoring the endianness a file format specifies.
The reason both big- and little-endian coexist is that different CPU makers used different conventions for representing multibyte data, and no single standard emerged at the time.
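Because both orders are still in use, portable code sometimes has to discover the host's byte order at runtime. One common sketch (not the only way; many compilers also predefine macros such as __BYTE_ORDER__ for a compile-time check):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t probe = 0x0102;
    /* Inspect the byte at the lowest address. Reading through an
       unsigned char pointer is permitted by the aliasing rules. */
    unsigned char first = *(unsigned char *)&probe;
    printf("host is %s-endian\n", first == 0x02 ? "little" : "big");
    return 0;
}
```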