Why is this program showing the following output?
#include <bitset>
#include <iostream>
...
{
std::bitset<8> b1(01100100); std::cout << b1 << std::endl;
std::bitset<8> b2(11111111); std::cout << b2 << std::endl; // see, this variable has been assigned
                                                           // the value 11111111, whereas during
                                                           // execution it takes the value 11000111
std::cout << "b1 & b2: " << (b1 & b2) << '\n';
std::cout << "b1 | b2: " << (b1 | b2) << '\n';
std::cout << "b1 ^ b2: " << (b1 ^ b2) << '\n';
}
This is the OUTPUT:
01000000
11000111
b1 & b2: 01000000
b1 | b2: 11000111
b1 ^ b2: 10000111
At first I thought there was something wrong with the header file (I was using MinGW), so I checked with MSVC, but it showed the same thing. Please help.
Despite the appearance, the 11111111 is decimal. The binary representation of 11111111₁₀ is 101010011000101011000111₂. Upon construction, std::bitset<8> takes the eight least significant bits of that: 11000111₂.
The first case is similar, except the 01100100 is octal (due to the leading zero). The same number expressed in binary is 1001000000001000000₂, whose eight least significant bits are 01000000₂.
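To see the truncation numerically: eight bits can hold the values 0 through 255, so std::bitset<8> effectively keeps the literal's value modulo 256. A minimal check (an illustrative sketch, not part of the original program):

#include <bitset>
#include <iostream>

int main() {
    // 11111111 is parsed as decimal, 01100100 as octal (leading zero).
    // std::bitset<8> keeps only the low eight bits, i.e. the value modulo 256.
    std::cout << (11111111 % 256) << " -> " << std::bitset<8>(11111111) << '\n'; // 199 -> 11000111
    std::cout << (01100100 % 256) << " -> " << std::bitset<8>(01100100) << '\n'; // 64  -> 01000000
}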
One way to represent a bitset with a value of 11111111₂ is std::bitset<8> b1(0xff).
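If your compiler supports C++14, a binary literal makes the intent explicit (assuming a sufficiently recent MinGW or MSVC):

std::bitset<8> b1(0b01100100); // C++14 binary literal
std::bitset<8> b2(0b11111111);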
Alternatively, you can construct a bitset from a binary string:
std::bitset<8> b1(std::string("01100100"));
std::bitset<8> b2(std::string("11111111"));
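Putting it together, a complete program built from the snippets above (a sketch, with the expected output shown in comments) would be:

#include <bitset>
#include <iostream>
#include <string>

int main() {
    // Construct from binary strings so each character maps to one bit.
    std::bitset<8> b1(std::string("01100100"));
    std::bitset<8> b2(std::string("11111111"));

    std::cout << b1 << '\n';                       // 01100100
    std::cout << b2 << '\n';                       // 11111111
    std::cout << "b1 & b2: " << (b1 & b2) << '\n'; // 01100100
    std::cout << "b1 | b2: " << (b1 | b2) << '\n'; // 11111111
    std::cout << "b1 ^ b2: " << (b1 ^ b2) << '\n'; // 10011011
}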