The Microsoft documentation for BinaryReader.ReadUInt32 (for example) states: "Reads a 4-byte unsigned integer from the current stream using little-endian encoding." However, is this always correct, even on big-endian systems?
The storage format is little-endian: a 4-byte (32-bit) value is stored as d7-d0, d15-d8, d23-d16, d31-d24, and a 2-byte (16-bit) value is stored as d7-d0, d15-d8.
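As a concrete illustration of that layout (the value 0x12345678 is an arbitrary example of my own, not from the original), a short C# sketch that dumps the bytes BinaryWriter emits for a 32-bit value:

using System;
using System.IO;

class LayoutDemo
{
    static void Main()
    {
        var ms = new MemoryStream();
        var writer = new BinaryWriter(ms);

        // BinaryWriter, like BinaryReader, is documented as little-endian:
        // the least-significant byte (d7-d0) is written first.
        writer.Write(0x12345678u);
        writer.Flush();

        // Prints "78 56 34 12" on any host.
        Console.WriteLine(BitConverter.ToString(ms.ToArray()).Replace("-", " "));
    }
}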
Big-endian is an order in which the "big end" (most significant value in the sequence) is stored first, at the lowest storage address. Little-endian is an order in which the "little end" (least significant value in the sequence) is stored first.
Little-endian is the default memory format for ARM processors. In little-endian format, the byte with the lowest address in a word is the least-significant byte of the word.
If the machine is little-endian, the integer 1 would be stored as "01 00 00 00". The program checks the first byte by dereferencing the cptr pointer: if it equals 0, the processor is big-endian ("00 00 00 01"); if it equals 1, the processor is little-endian ("01 00 00 00").
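The same test can be expressed in C# (a minimal sketch of my own, not the original C program; BitConverter.GetBytes exposes the bytes of a value in native memory order):

using System;

class EndianCheck
{
    static void Main()
    {
        // Look at the first in-memory byte of the integer 1,
        // mirroring the C pointer-dereference trick described above.
        bool isLittleEndian = BitConverter.GetBytes(1u)[0] == 1;
        Console.WriteLine(isLittleEndian ? "little-endian" : "big-endian");

        // The framework also reports this directly:
        Console.WriteLine(BitConverter.IsLittleEndian);
    }
}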
The documentation is certainly a hint that implementors on other platforms should use little-endian encoding, and Mono seems to respect this:
public virtual uint ReadUInt32() {
    FillBuffer(4);
    // Assemble the value byte-by-byte, least-significant first,
    // so the result is little-endian regardless of host endianness.
    return((uint) (m_buffer[0] |
                   (m_buffer[1] << 8) |
                   (m_buffer[2] << 16) |
                   (m_buffer[3] << 24)));
}
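Because the bytes are combined explicitly rather than reinterpreted from memory, the result does not depend on the host CPU. A quick usage sketch (the input bytes are my own example; BinaryPrimitives.ReadUInt32BigEndian is available on modern .NET if you need the opposite byte order):

using System;
using System.Buffers.Binary;
using System.IO;

class ReadDemo
{
    static void Main()
    {
        byte[] data = { 0x01, 0x00, 0x00, 0x00 };

        // BinaryReader assembles bytes least-significant first,
        // so this prints 1 on little-endian and big-endian hosts alike.
        var reader = new BinaryReader(new MemoryStream(data));
        Console.WriteLine(reader.ReadUInt32());

        // For big-endian input, modern .NET offers BinaryPrimitives:
        Console.WriteLine(BinaryPrimitives.ReadUInt32BigEndian(data)); // 16777216
    }
}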