How do I convert a float to a byte array of length 4 (an array of char)? I need to send some data over the network via TCP, and need to send a float as a byte array. (I know the precision is two decimal digits, so at the moment, on the client side, I multiply by 100 and on the server divide by 100: basically converting to an integer and then extracting the bytes with & 0xFF and << operations.) But this is ugly and can lose precision over time.
Reading any type as a sequence of bytes is quite simple:
#include <cstdio>

float f = 0.5f;
unsigned char const * p = reinterpret_cast<unsigned char const *>(&f);

for (std::size_t i = 0; i != sizeof(float); ++i)
{
    std::printf("The byte #%zu is 0x%02X\n", i, p[i]);
}
Writing to a float from a network stream works similarly, only you'd leave out the const.
It is always permitted to reinterpret any object as a sequence of bytes (any char-type is permissible), and this is expressly not an aliasing violation. Note that the binary representation of any type is of course platform-dependent, so you should only use this for serialization if the recipient has the same platform.
The first thing you have to do is determine the format of the float in the network protocol. Just knowing that it is 4 bytes doesn't tell you much: an IBM mainframe, an Oracle Sparc and the usual PC all have four-byte floats, but in three different formats. Once you know the format, depending on it and your portability requirements, two different strategies can be used:
If the format in the protocol is IEEE (the most frequent case), and you don't have to be portable to machines which aren't IEEE (Windows and most Unix are IEEE; most mainframes aren't), then you can use type punning to convert the float to a uint32_t, and output that, using either:
std::ostream&
output32BitUInt( std::ostream& dest, uint32_t value )
{
    dest.put( (value >> 24) & 0xFF );
    dest.put( (value >> 16) & 0xFF );
    dest.put( (value >>  8) & 0xFF );
    dest.put( value         & 0xFF );
    return dest;
}
for big-endian (the usual network order), or:
std::ostream&
output32BitUInt( std::ostream& dest, uint32_t value )
{
    dest.put( value         & 0xFF );
    dest.put( (value >>  8) & 0xFF );
    dest.put( (value >> 16) & 0xFF );
    dest.put( (value >> 24) & 0xFF );
    return dest;
}
for little-endian (used by some protocols). Which one you use will depend on the format defined for the protocol.
To convert from float to uint32_t, you'll have to check your compiler. Using memcpy is the only method fully guaranteed by the standard; the intent is that using a reinterpret_cast<uint32_t&> on the float works as well, and most (all?) compilers also support using a union.
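The memcpy approach, for instance, might look like this sketch (the function name is mine; the static_assert guards the assumption that float is 32 bits):

```cpp
#include <cstdint>
#include <cstring>

uint32_t floatToBits( float f )
{
    static_assert( sizeof(float) == sizeof(uint32_t),
                   "float must be 32 bits" );
    uint32_t bits;
    std::memcpy( &bits, &f, sizeof(bits) );
    return bits;
}
```

On most mainstream compilers the memcpy is optimized away entirely, so there is no runtime cost compared to the cast or union tricks.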
If you need to be portable to mainframes as well, or the format is something other than IEEE, then you'll need to extract exponent, sign and mantissa from the float, and output each in the target format. Something like the following should work to output IEEE big-endian on any machine (including mainframes which don't use IEEE), and should give you some idea:
oxdrstream&
oxdrstream::operator<<( float source )
{
    BytePutter dest( *this ) ;

    bool isNeg = source < 0 ;
    if ( isNeg ) {
        source = - source ;
    }
    int exp ;
    if ( source == 0.0 ) {
        exp = 0 ;
    } else {
        source = ldexp( frexp( source, &exp ), 24 ) ;
        exp += 126 ;
    }
    uint32_t mant = source ;
    dest.put( (isNeg ? 0x80 : 0x00) | exp >> 1 ) ;
    dest.put( ((exp << 7) & 0x80) | ((mant >> 16) & 0x7F) ) ;
    dest.put( mant >> 8 ) ;
    dest.put( mant ) ;
    return *this ;
}
(BytePutter is a simple class which takes care of the usual boilerplate and does error checking.) Of course, the various manipulations for the output will be different if the output format is not IEEE, but this should show the basic principles. (If you need portability to some of the more exotic mainframes, which don't support uint32_t, you can replace it with any unsigned integral type which is larger than 23 bits.)
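To show the same extraction in a self-contained form, here is a standalone sketch of the operator<< above, writing into a plain byte array instead of the oxdrstream/BytePutter classes (which aren't shown here). Like the original, it doesn't handle infinities, NaNs or subnormals:

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical standalone version of the operator<< above: encodes
// `source` as IEEE 754 single precision, big-endian, into dest[0..3].
void encodeIeeeBigEndian( float source, unsigned char dest[4] )
{
    bool isNeg = source < 0;
    if ( isNeg ) {
        source = -source;
    }
    int exp;
    if ( source == 0.0 ) {
        exp = 0;
    } else {
        // Scale the significand to 24 bits; bias the exponent.
        source = std::ldexp( std::frexp( source, &exp ), 24 );
        exp += 126;
    }
    uint32_t mant = source;
    dest[0] = (isNeg ? 0x80 : 0x00) | exp >> 1;
    dest[1] = ((exp << 7) & 0x80) | ((mant >> 16) & 0x7F);
    dest[2] = mant >> 8;
    dest[3] = mant;
}
```

On an IEEE machine, encoding 1.0f produces the bytes 3F 80 00 00, which is exactly the IEEE 754 single-precision representation in network order, so the result agrees with the type-punning strategy where both apply.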