 

Why does std::bitset only support integral data types? Why is float not supported?

Tags:

c++

std-bitset

On trying to generate the bit pattern of a float as follows:

std::cout << std::bitset<32>(32.5) << std::endl;

the compiler generates this warning:

warning: implicit conversion from 'double' to 'unsigned long long' changes value
  from 32.5 to 32 [-Wliteral-conversion]
 std::cout << std::bitset<32>(32.5) << std::endl;

Output if we ignore the warning:

00000000000000000000000000100000

Why can't bitset detect floats and output the correct bit sequence, when casting to char* and walking the memory does show the correct sequence? The following works, but it depends on the machine's byte ordering and is mostly unreadable:

#include <bitset>
#include <climits>
#include <iostream>

template <typename T>
void printMemory(const T& data) {
    // Walk the object's bytes; unsigned char avoids sign-extension surprises.
    const unsigned char* begin = reinterpret_cast<const unsigned char*>(&data);
    const unsigned char* end = begin + sizeof(data);
    while (begin != end)
        std::cout << std::bitset<CHAR_BIT>(*begin++) << " ";
    std::cout << std::endl;
}

Output:

00000000 00000000 00000010 01000010 

Is there a reason not to support floats? Is there an alternative for floats?

asked Feb 10 '26 by cedoc

1 Answer

What would you expect to appear in your bitset if you supplied a float? Presumably some sort of representation of an IEEE 754 binary32 floating point number in big-endian format? What about platforms that don't represent their floats in a way that's even remotely similar to that? Should the implementation bend over backwards to (probably lossily) convert the float supplied to what you want?

The reason it doesn't is that there is no standard-defined format for floats. They don't even have to be 32 bits; they just usually are on most platforms.
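You can see this lack of a guarantee reflected in the standard library itself: <limits> merely describes whatever format your platform happens to use rather than mandating one. A small sketch that queries those properties (the printout labels are mine):

#include <climits>
#include <iostream>
#include <limits>

int main() {
    // The standard exposes the parameters of the float format,
    // but deliberately does not pin them down.
    std::cout << std::boolalpha
              << "is_iec559 (IEEE 754): " << std::numeric_limits<float>::is_iec559 << '\n'
              << "radix:                " << std::numeric_limits<float>::radix << '\n'
              << "mantissa digits:      " << std::numeric_limits<float>::digits << '\n'
              << "bits in object:       " << sizeof(float) * CHAR_BIT << '\n';
}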

C++ and C will run on very tiny and/or odd platforms. The standard can't count on what's 'usually the case'. There were/are C/C++ compilers for 8/16-bit 6502 systems whose sorry excuse for a native floating point format was (I think) a 6-byte entity that used packed BCD encoding.

This is the same reason that signed integers are also unsupported. Two's complement is not universal, just almost universal. :-)
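That said, if you are willing to assume the common case (a 32-bit IEC 559/IEEE 754 float), you can get a readable, endianness-independent bit pattern by copying the float's bytes into a fixed-width integer and feeding that to bitset. A minimal sketch; printFloatBits is a hypothetical helper name, and the asserts spell out the assumptions:

#include <bitset>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <limits>

// Hypothetical alternative for floats: copy the object representation into
// a fixed-width unsigned integer and hand that to bitset. On C++20 you
// could use std::bit_cast instead of memcpy.
void printFloatBits(float f) {
    static_assert(sizeof(float) == sizeof(std::uint32_t),
                  "this sketch assumes a 32-bit float");
    static_assert(std::numeric_limits<float>::is_iec559,
                  "this sketch assumes IEEE 754 floats");
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // well-defined type pun
    std::cout << std::bitset<32>(bits) << '\n';
}

int main() {
    printFloatBits(32.5f);  // 01000010000000100000000000000000
}

Because the bytes land in an integer before printing, the output is the logical bit pattern of the value, independent of the machine's byte order, unlike the byte-walking version in the question.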

answered Feb 18 '26 by Omnifarious


