I did some googling and couldn't find any good article on this question. What should I watch out for when implementing an app that I want to be endian-agnostic?
Endianness. The attribute of a system that indicates whether integers are represented with the most significant byte stored at the lowest address (big endian) or at the highest address (little endian). Each address stores one element of the memory array.
Since you can't normally address the bits within a byte individually, there's no concept of "bit endianness" generally.
Big-endian is an order in which the "big end" (most significant value in the sequence) is stored first, at the lowest storage address. Little-endian is an order in which the "little end" (least significant value in the sequence) is stored first.
Endian or endianness refers to how bytes are ordered within computer memory. If we want to represent a two-byte hex number, say c23f, we'll store it in two sequential bytes: c2 followed by 3f, which seems like the natural way of doing it. This number, stored with the big end first, is called big-endian.
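For illustration (a minimal sketch, not part of the original answer), here's a small C program that inspects the first byte of that two-byte value to reveal which order the machine it runs on actually uses:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t value = 0xc23f;            /* the two-byte example value */
    uint8_t *bytes = (uint8_t *)&value; /* view the same memory as raw bytes */
    if (bytes[0] == 0xc2)
        printf("big endian: c2 is stored at the lowest address\n");
    else
        printf("little endian: 3f is stored at the lowest address\n");
    return 0;
}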
The only time you have to care about endianness is when you're transferring endian-sensitive binary data (that is, not text) between systems that might not have the same endianness. The normal solution is to use "network byte order" (AKA big-endian) to transfer data, and then swizzle the bytes if necessary on the other end.
To convert from host to network byte order, use htons(3) and htonl(3). To convert back, use ntohs(3) and ntohl(3). Check out the man pages for everything you need to know. For 64-bit data, this question and answer will be helpful.
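As a minimal sketch (assuming a POSIX system with <arpa/inet.h>; the helper names send_u32 and recv_u32 are just illustrative, and error handling is omitted), transferring a 32-bit value in network byte order might look like this:

#include <arpa/inet.h>  /* htonl, ntohl */
#include <stdint.h>
#include <unistd.h>     /* write, read */

/* Sender: convert to network byte order (big endian) before writing. */
void send_u32(int fd, uint32_t value) {
    uint32_t wire = htonl(value);
    write(fd, &wire, sizeof wire);
}

/* Receiver: convert from network byte order back to host byte order. */
uint32_t recv_u32(int fd) {
    uint32_t wire;
    read(fd, &wire, sizeof wire);
    return ntohl(wire);
}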
What should I watch out for when implementing an app that I want to be endian-agnostic?
You first have to recognize when endianness becomes an issue. And it mostly becomes an issue when you have to read or write data from somewhere external, be it reading data from a file or doing network communication between computers.
In such cases, endianness matters for integers bigger than a byte, as they are represented differently in memory by different platforms. This means that every time you read or write external data, you need to do more than just dump your program's memory or read the data directly into your own variables.
e.g. if you have this snippet of code:
unsigned int var = ...;
write(fd, &var, sizeof var);
You're directly writing out the memory content of var, which means the data gets presented to wherever it goes exactly as it is represented in your own computer's memory.
If you write this data to a file, the file content will be different whether you run the program on a big endian or a little endian machine. So that code is not endian agnostic, and you'd want to avoid doing things like this.
Instead focus on the data format. When reading/writing data, always decide the data format first, and then write the code to handle it. This might already have been decided for you if you need to read some existing well defined file format or implement an existing network protocol.
Once you know the data format, instead of e.g. dumping out an int variable directly, your code does this:
uint32_t i = ...;
uint8_t buf[4];
buf[0] = (i&0xff000000) >> 24;
buf[1] = (i&0x00ff0000) >> 16;
buf[2] = (i&0x0000ff00) >> 8;
buf[3] = (i&0x000000ff);
write(fd, buf, sizeof buf);
We've now placed the most significant byte first in the buffer and the least significant byte last. That integer is represented in big endian format in buf, regardless of the endianness of the host - so this code is endian-agnostic.
The consumer of this data must know that the data is represented in a big endian format. And regardless of the host the program runs on, this code would read that data just fine:
uint32_t i;
uint8_t buf[4];
read(fd, buf, sizeof buf);
i = (uint32_t)buf[0] << 24;
i |= (uint32_t)buf[1] << 16;
i |= (uint32_t)buf[2] << 8;
i |= (uint32_t)buf[3];
Conversely, if the data you need to read is known to be in little endian format, the endianness-agnostic code would just do:
uint32_t i;
uint8_t buf[4];
read(fd, buf, sizeof buf);
i = (uint32_t)buf[3] << 24;
i |= (uint32_t)buf[2] << 16;
i |= (uint32_t)buf[1] << 8;
i |= (uint32_t)buf[0];
You can make some nice inline functions or macros to wrap and unwrap all the 2-, 4-, and 8-byte integer types you need. If you use those and think in terms of the data format rather than the endianness of the processor, your code will not depend on the machine it's running on.
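For example, a pair of such helpers for 32-bit big endian data might look like this (a sketch; the names pack_be32 and unpack_be32 are just illustrative, and the same pattern extends to 16- and 64-bit values and to little endian):

#include <stdint.h>

/* Store a 32-bit value into buf in big endian order, regardless of host. */
static inline void pack_be32(uint8_t *buf, uint32_t i) {
    buf[0] = (uint8_t)(i >> 24);
    buf[1] = (uint8_t)(i >> 16);
    buf[2] = (uint8_t)(i >> 8);
    buf[3] = (uint8_t)i;
}

/* Read a 32-bit big endian value from buf, regardless of host. */
static inline uint32_t unpack_be32(const uint8_t *buf) {
    return ((uint32_t)buf[0] << 24) |
           ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8)  |
            (uint32_t)buf[3];
}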
This is more code than many other solutions, but I've yet to write a program where the extra work has had any meaningful impact on performance, even when shuffling 1 Gbps+ of data around.
It also avoids misaligned memory access which you can easily get with an approach of e.g.
uint32_t i;
uint8_t buf[4];
read(fd, buf, sizeof buf);
i = ntohl(*(uint32_t *)buf);
which at best incurs a performance hit (insignificant on some platforms, severe on others), and at worst crashes on platforms that can't do unaligned access to integers.
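If you do want to use ntohl for the conversion, a safer variant (a sketch, not from the original answer) copies the bytes into a properly aligned variable first; it still assumes the data on the wire is big endian:

#include <arpa/inet.h>  /* ntohl */
#include <string.h>     /* memcpy */

uint32_t i;
uint8_t buf[4];
read(fd, buf, sizeof buf);
memcpy(&i, buf, sizeof i);  /* no unaligned pointer dereference */
i = ntohl(i);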
This might be a good article for you to read: The byte order fallacy
The byte order of the computer doesn't matter much at all except to compiler writers and the like, who fuss over allocation of bytes of memory mapped to register pieces. Chances are you're not a compiler writer, so the computer's byte order shouldn't matter to you one bit.
Notice the phrase "computer's byte order". What does matter is the byte order of a peripheral or encoded data stream, but--and this is the key point--the byte order of the computer doing the processing is irrelevant to the processing of the data itself. If the data stream encodes values with byte order B, then the algorithm to decode the value on computer with byte order C should be about B, not about the relationship between B and C.
Several answers have covered file IO, which is certainly the most common endian concern. I'll touch on one not-yet-mentioned: Unions.
The following union is a common tool in SIMD/SSE programming, and is not endian-friendly:
union uint128_t {
    __m128i  dq;
    uint64_t dd[2];
    uint32_t dw[4];
    uint16_t dh[8];
    uint8_t  db[16];
};
Any code accessing the dd/dw/dh/db forms will be doing so in endian-specific fashion. On 32-bit CPUs it is also somewhat common to see simpler unions that allow more easily breaking 64-bit arithmetic into 32-bit portions:
union u64_parts {
    uint64_t dd;
    uint32_t dw[2];
};
Since in this use case it is rare (if ever) that you want to iterate over each element of the union, I prefer to write such unions like this:
union u64_parts {
    uint64_t dd;
    struct {
#ifdef BIG_ENDIAN
        uint32_t dw2, dw1;
#else
        uint32_t dw1, dw2;
#endif
    };
};
The result is implicit endian-swapping for any code that accesses dw1/dw2 directly. The same design approach can be used for the 128-bit SIMD datatype above as well, though it ends up being considerably more verbose.
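A sketch of what that more verbose 128-bit version could look like (assuming an SSE target with <emmintrin.h>; the union name uint128_parts and the members dw1..dw4 are illustrative, and the BIG_ENDIAN macro is the same hypothetical one used above, with dw1 always being the least significant dword):

#include <emmintrin.h>  /* __m128i */
#include <stdint.h>

union uint128_parts {
    __m128i dq;
    struct {
#ifdef BIG_ENDIAN
        uint32_t dw4, dw3, dw2, dw1;  /* most significant dword at lowest address */
#else
        uint32_t dw1, dw2, dw3, dw4;  /* least significant dword at lowest address */
#endif
    };
};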
Disclaimer: Union use is often frowned upon because of the loose standards definitions regarding structure padding and alignment. I find unions very useful and have used them extensively, and I haven't run into any cross-compatibility issues in a very long time (15+ yrs). Union padding/alignment will behave in an expected and consistent fashion for any current compiler targeting x86, ARM, or PowerPC.