 

Are there any advantages to network byte order in a new protocol?

(I know many people are going to be tempted to close this question; please don't; I'm asking for concrete technical answers, if any exist.)

"Network byte order" is big-endian for reasons that cannot be asked on stackoverflow. Lots of old protocols use that order and can't be changed but I wonder if there are any technical reasons to choose big endian for a new protocol.

I would think little endian is better, because 99.99% of processors in use are little endian (ARM can technically do both, but in reality it is always set to little endian). So I was surprised to see that CBOR, a relatively recent protocol, uses big endian. Is there an advantage that I haven't thought of?

Timmmm, asked Oct 24 '25

1 Answer

It boils down to human factors: It is easier to read a multi-byte integer in a hex dump if it is encoded with the most significant byte(s) first. For example, the CBOR representation of 0x1234 (4,660) is the byte sequence 19 12 34. If you are looking for the value 0x1234, it is easier to spot it that way.
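To make the readability point concrete, here is a minimal Python sketch (using only the standard library) comparing how 0x1234 looks in each byte order, plus its CBOR encoding. The 0x19 prefix is CBOR's header byte for an unsigned 16-bit integer, which is then stored big-endian per the CBOR specification.

```python
import struct

value = 0x1234  # 4,660 decimal

# Big-endian ("network byte order"): most significant byte first.
big = struct.pack(">H", value)      # b'\x12\x34'

# Little-endian: least significant byte first.
little = struct.pack("<H", value)   # b'\x34\x12'

# CBOR: header byte 0x19 (uint16 follows) + the value, big-endian.
cbor = bytes([0x19]) + value.to_bytes(2, "big")  # b'\x19\x12\x34'

print(big.hex())     # 1234 -- reads the same as the literal 0x1234
print(little.hex())  # 3412 -- byte-reversed in a hex dump
print(cbor.hex())    # 191234
```

In a hex dump, the big-endian form appears in the same digit order you would write the number, which is exactly the human-factors advantage the answer describes.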

The longer story:

I've been in the field for over 40 years now, so there's a lot of history behind this. Even the definition of a "byte" has changed over those years, so it may take a bit of an open mind to understand how this evolved.

Dumps of binary information weren't always in bytes, nor in hexadecimal. For example, on the PDP-11 (with 16-bit words and 8-bit bytes), word-wide dumps in octal notation were common. This was useful because of the machine architecture, which included 8 registers and 8 addressing modes, so machine-language dumps in octal were easier to decode than hex.

Burt_Harris, answered Oct 27 '25


