I have a binary file that was created on a unix machine. It's just a bunch of records written one after another. The record is defined something like this:
struct RECORD {
    UINT32 foo;
    UINT32 bar;
    CHAR fooword[11];
    CHAR barword[11];
    UINT16 baz;
};
I am trying to figure out how I would read and interpret this data on a Windows machine. I have something like this:
fstream f;
f.open("file.bin", ios::in | ios::binary);
RECORD r;
f.read((char*)&r, sizeof(RECORD));
cout << "fooword = " << r.fooword << endl;
I get a bunch of data, but it's not the data I expect. I suspect my problem has to do with the endianness difference between the machines, so I've come to ask about that.
I understand that multi-byte values are stored little-endian on Windows and big-endian in the Unix environment the file came from. For two bytes, the value 0x1234 is stored as the bytes 34 12 on Windows and as 12 34 on the Unix system, so reading one format on the other machine gives 0x3412.
Does endianness affect the byte order of the struct as a whole, or of each individual member of the struct? What approaches would I take to convert a struct created on a unix system to one that has the same data on a windows system? Any links that are more in depth than the byte order of a couple bytes would be great, too!
In networking, big-endian is the standard byte order for data exchanged over a network (hence the name "network byte order"). Little-endian machines therefore need to convert their data to big-endian before sending it over a network, and to swap the byte order back when they receive data from a network.
Big-endian is a byte order in which the "big end" (the most significant byte) is stored first, at the lowest storage address. Little-endian is a byte order in which the "little end" (the least significant byte) is stored first.
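If you're not sure which order your own machine uses, a minimal sketch like this prints the bytes of a 32-bit value as they actually sit in memory:

#include <cstdio>
#include <cstdint>

int main()
{
    std::uint32_t value = 0x12345678;
    const unsigned char *bytes = reinterpret_cast<const unsigned char *>(&value);

    // A little-endian machine prints "78 56 34 12";
    // a big-endian machine prints "12 34 56 78".
    for (unsigned i = 0; i < sizeof(value); ++i)
        std::printf("%02x ", bytes[i]);
    std::printf("\n");
    return 0;
}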
Endianness shouldn't affect the layout of the struct itself: the offset of the first member is always zero, and each subsequent member sits at a higher offset than its predecessor. Endianness only changes the byte order within each individual multi-byte member.
One advantage of little-endian is that a value can be read at a variety of widths. For example, the variable A = 0x13 stored as a 64-bit value at address B is laid out in memory as 13 00 00 00 00 00 00 00, so A reads back as 19 (0x13) whether you perform an 8-, 16-, 32-, or 64-bit read from B.
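Here's a minimal sketch of that property; it copies the low bytes out with memcpy (reading the same address at different widths) rather than casting pointers, to stay on the right side of aliasing rules:

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    std::uint64_t a = 0x13; // 13 00 00 00 00 00 00 00 on a little-endian machine

    std::uint8_t  r8;
    std::uint16_t r16;
    std::uint32_t r32;
    std::memcpy(&r8,  &a, sizeof r8);   // read the first byte
    std::memcpy(&r16, &a, sizeof r16);  // read the first two bytes
    std::memcpy(&r32, &a, sizeof r32);  // read the first four bytes

    // On a little-endian machine every one of these prints 19.
    std::cout << (int)r8 << " " << r16 << " " << r32 << " " << a << std::endl;
    return 0;
}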
As well as the endianness, you need to be aware of padding differences between the two platforms. Particularly with odd-length char arrays and 16-bit values, as you have here, you may well find different numbers of pad bytes between some members.
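You can check what your compiler actually does with a quick sketch along these lines (the struct is the one from the question, with fixed-width standard types substituted for the platform typedefs):

#include <cstdint>
#include <cstddef>
#include <iostream>

struct RECORD {
    std::uint32_t foo;
    std::uint32_t bar;
    char          fooword[11];
    char          barword[11];
    std::uint16_t baz;
};

int main()
{
    // If sizeof(RECORD) is larger than 32 (4 + 4 + 11 + 11 + 2),
    // the compiler has inserted pad bytes somewhere.
    std::cout << "sizeof(RECORD) = " << sizeof(RECORD) << std::endl;
    std::cout << "foo     at " << offsetof(RECORD, foo)     << std::endl;
    std::cout << "bar     at " << offsetof(RECORD, bar)     << std::endl;
    std::cout << "fooword at " << offsetof(RECORD, fooword) << std::endl;
    std::cout << "barword at " << offsetof(RECORD, barword) << std::endl;
    std::cout << "baz     at " << offsetof(RECORD, baz)     << std::endl;
    return 0;
}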
Edit: if the structure was written out with no packing, then it should be fairly straightforward. Something like this (untested) code should do the job:
// Functions to swap the endian of 16 and 32 bit values
inline void SwapEndian(UINT16 &val)
{
    val = (val << 8) | (val >> 8);
}

inline void SwapEndian(UINT32 &val)
{
    val = (val << 24) | ((val << 8) & 0x00ff0000) |
          ((val >> 8) & 0x0000ff00) | (val >> 24);
}
Then, once you've loaded the struct, just swap each element:
SwapEndian(r.foo);
SwapEndian(r.bar);
SwapEndian(r.baz);
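Putting the pieces together, here is an untested sketch of a reader that loads each field individually, which also sidesteps the padding issue mentioned earlier; it assumes, per the question, that the file's integers are big-endian and the host is little-endian Windows:

#include <cstdint>
#include <fstream>
#include <iostream>

typedef std::uint16_t UINT16;
typedef std::uint32_t UINT32;

inline void SwapEndian(UINT16 &val) { val = (val << 8) | (val >> 8); }
inline void SwapEndian(UINT32 &val)
{
    val = (val << 24) | ((val << 8) & 0x00ff0000) |
          ((val >> 8) & 0x0000ff00) | (val >> 24);
}

struct RECORD {
    UINT32 foo;
    UINT32 bar;
    char   fooword[11];
    char   barword[11];
    UINT16 baz;
};

// Read one record field by field so the compiler's struct padding
// never has to match the file layout, then swap the integer fields
// from big-endian (the file) to the host's little-endian order.
bool ReadRecord(std::istream &f, RECORD &r)
{
    f.read(reinterpret_cast<char *>(&r.foo), sizeof r.foo);
    f.read(reinterpret_cast<char *>(&r.bar), sizeof r.bar);
    f.read(r.fooword, sizeof r.fooword);
    f.read(r.barword, sizeof r.barword);
    f.read(reinterpret_cast<char *>(&r.baz), sizeof r.baz);
    if (!f)
        return false;

    SwapEndian(r.foo);
    SwapEndian(r.bar);
    SwapEndian(r.baz);
    return true;
}

int main()
{
    std::ifstream f("file.bin", std::ios::binary);
    RECORD r;
    while (ReadRecord(f, r)) {
        // fooword may not be NUL-terminated, so write exactly 11 bytes.
        std::cout << "foo = " << r.foo << ", fooword = ";
        std::cout.write(r.fooword, sizeof r.fooword);
        std::cout << std::endl;
    }
    return 0;
}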
Actually, endianness is a property of the underlying hardware, not the OS.
The best solution is to convert to a standard byte order when writing the data -- Google for "network byte order" and you should find the methods to do this.
Edit: here's the link: http://www.gnu.org/software/hello/manual/libc/Byte-Order.html
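For example, with the RECORD struct from the question, a sketch of the fix-up after reading might look like this (ntohl/ntohs come from winsock2.h on Windows and arpa/inet.h on Unix):

#ifdef _WIN32
#include <winsock2.h>   // ntohl / ntohs; link against ws2_32.lib
#else
#include <arpa/inet.h>  // ntohl / ntohs
#endif

// Convert a RECORD read from a big-endian ("network byte order")
// file to the host's native order. The char arrays are single
// bytes each, so they need no conversion.
void RecordToHostOrder(RECORD &r)
{
    r.foo = ntohl(r.foo);   // 32-bit network-to-host
    r.bar = ntohl(r.bar);
    r.baz = ntohs(r.baz);   // 16-bit network-to-host
}

On a little-endian host these functions swap the bytes; on a big-endian host they are no-ops, so the same code is portable in both directions.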