
Can I use a binary literal in C or C++?

Tags: c++, c, binary



If you are using GCC, you can use a GCC extension (which is included in the C++14 standard) for this:

int x = 0b00010000;

You can use binary literals. They are standardized in C++14. For example,

int x = 0b11000;

Support in GCC

Support in GCC began in GCC 4.3 (see https://gcc.gnu.org/gcc-4.3/changes.html) as an extension to the C language family (see https://gcc.gnu.org/onlinedocs/gcc/C-Extensions.html#C-Extensions); since GCC 4.9 it is recognized either as a C++14 feature or as an extension (see Difference between GCC binary literals and C++14 ones?).

Support in Visual Studio

Support in Visual Studio started in Visual Studio 2015 Preview (see https://www.visualstudio.com/news/vs2015-preview-vs#C++).
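
For reference, a minimal C++14 sketch (the digit separator ' shown here is also a C++14 feature; compile with -std=c++14 or later):

#include <iostream>

int main() {
    int flags = 0b0001'0000;   // 16; apostrophes may separate digit groups
    unsigned mask = 0B1010;    // the prefix may be written 0b or 0B
    std::cout << flags << ' ' << mask << '\n';   // prints "16 10"
}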


template<unsigned long N>
struct bin {
    // Peel off the last decimal digit (which must be 0 or 1)
    // and recurse on the remaining digits.
    enum { value = (N % 10) + 2 * bin<N / 10>::value };
};

template<>
struct bin<0> {
    enum { value = 0 };   // base case: no digits left
};

// ...
    std::cout << bin<1000>::value << '\n';   // prints 8

The leftmost digit of the literal still has to be 1 (a leading 0 would make the compiler parse the number as octal), but the remaining digits may be any mix of 0s and 1s.
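
Because the value is a compile-time constant, it can also be checked with static_assert (C++11 and later); a minimal sketch:

static_assert(bin<101>::value == 5, "binary 101 is 5");
static_assert(bin<11111111>::value == 255, "eight 1s is 255");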


You can use BOOST_BINARY while waiting for C++0x. :) BOOST_BINARY arguably has an advantage over the template implementation in that it can be used in C programs as well (it is 100% preprocessor-driven).

To do the converse (i.e. print out a number in binary form), you can use the non-portable itoa function, or implement your own.

Unfortunately you cannot do base 2 formatting with STL streams (since setbase will only honour bases 8, 10 and 16), but you can use either a std::string version of itoa, or (the more concise, yet marginally less efficient) std::bitset.

#include <boost/utility/binary.hpp>
#include <stdio.h>
#include <stdlib.h>
#include <bitset>
#include <iostream>
#include <iomanip>

using namespace std;

int main() {
  unsigned short b = BOOST_BINARY( 10010 );
  char buf[sizeof(b)*8+1];
  printf("hex: %04x, dec: %u, oct: %06o, bin: %16s\n", b, b, b, itoa(b, buf, 2));
  cout << setfill('0') <<
    "hex: " << hex << setw(4) << b << ", " <<
    "dec: " << dec << b << ", " <<
    "oct: " << oct << setw(6) << b << ", " <<
    "bin: " << bitset< 16 >(b) << endl;
  return 0;
}

produces:

hex: 0012, dec: 18, oct: 000022, bin:            10010
hex: 0012, dec: 18, oct: 000022, bin: 0000000000010010
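
Since itoa is not portable, the same binary formatting can be done with standard facilities only; a sketch using std::bitset, as mentioned above:

#include <bitset>
#include <iostream>
#include <string>

int main() {
  unsigned short b = 18;                                   // BOOST_BINARY( 10010 )
  std::string s = std::bitset<16>(b).to_string();          // "0000000000010010"
  std::cout << "bin: " << s << '\n';
  std::cout << std::bitset<16>("10010").to_ulong() << '\n';  // parses back: prints 18
}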

Also read Herb Sutter's The String Formatters of Manor Farm for an interesting discussion.


A few compilers (usually ones for microcontrollers) have a special feature: they recognize binary literals with the prefix "0b..." preceding the number. Most compilers (following the C/C++ standards) have no such feature, and if that is your case, here is my alternative solution:

#define B_0000    0
#define B_0001    1
#define B_0010    2
#define B_0011    3
#define B_0100    4
#define B_0101    5
#define B_0110    6
#define B_0111    7
#define B_1000    8
#define B_1001    9
#define B_1010    a
#define B_1011    b
#define B_1100    c
#define B_1101    d
#define B_1110    e
#define B_1111    f

#define _B2H(bits)    B_##bits
#define B2H(bits)    _B2H(bits)
#define _HEX(n)        0x##n
#define HEX(n)        _HEX(n)
#define _CCAT(a,b)    a##b
#define CCAT(a,b)   _CCAT(a,b)

#define BYTE(a,b)        HEX( CCAT(B2H(a),B2H(b)) )
#define WORD(a,b,c,d)    HEX( CCAT(CCAT(B2H(a),B2H(b)),CCAT(B2H(c),B2H(d))) )
#define DWORD(a,b,c,d,e,f,g,h)    HEX( CCAT(CCAT(CCAT(B2H(a),B2H(b)),CCAT(B2H(c),B2H(d))),CCAT(CCAT(B2H(e),B2H(f)),CCAT(B2H(g),B2H(h)))) )

// Using example
char b = BYTE(0100,0001); // Equivalent to b = 65; or b = 'A'; or b = 0x41;
unsigned int w = WORD(1101,1111,0100,0011); // Equivalent to w = 57155; or w = 0xdf43;
unsigned long int dw = DWORD(1101,1111,0100,0011,1111,1101,0010,1000); //Equivalent to dw = 3745774888; or dw = 0xdf43fd28;
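
A quick sanity check for the macros (a sketch assuming the definitions above are in scope; it compiles as C or C++):

#include <assert.h>
#include <stdio.h>

int main(void) {
    assert(BYTE(0100,0001) == 0x41);              /* 'A' */
    assert(WORD(1101,1111,0100,0011) == 0xdf43);
    printf("%c\n", BYTE(0100,0001));              /* prints A */
    return 0;
}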

Disadvantages (they are not such big ones):

  • The binary digits have to be grouped four by four;
  • The binary literals can only be unsigned integer numbers;

Advantages:

  • It is totally preprocessor-driven, so the executable program wastes no processor time on pointless operations (such as "?:", "<<", "+"), which might otherwise be performed hundreds of times in the final application;
  • It works in C compilers as well as in C++ (the template+enum solution works only in C++ compilers);
  • It is limited only by the maximum length of a literal constant value. An enum-based solution would hit a limit early (usually 8 bits: 0-255, the enum definition limit), whereas literal constants allow the compiler to accept greater numbers;
  • Other solutions demand an exaggerated number of constant definitions (too many defines, in my opinion), including long or multiple header files (in most cases neither easily readable nor understandable, making the project unnecessarily confusing and bloated, as with "BOOST_BINARY()");
  • Simplicity: the solution is easily read, understood, and adjusted to other cases (it could be extended to group digits eight by eight, too);

This thread may help.

/* Helper macros */
#define HEX__(n) 0x##n##LU
#define B8__(x) ((x&0x0000000FLU)?1:0) \
+((x&0x000000F0LU)?2:0) \
+((x&0x00000F00LU)?4:0) \
+((x&0x0000F000LU)?8:0) \
+((x&0x000F0000LU)?16:0) \
+((x&0x00F00000LU)?32:0) \
+((x&0x0F000000LU)?64:0) \
+((x&0xF0000000LU)?128:0)

/* User macros */
#define B8(d) ((unsigned char)B8__(HEX__(d)))
#define B16(dmsb,dlsb) (((unsigned short)B8(dmsb)<<8) \
+ B8(dlsb))
#define B32(dmsb,db2,db3,dlsb) (((unsigned long)B8(dmsb)<<24) \
+ ((unsigned long)B8(db2)<<16) \
+ ((unsigned long)B8(db3)<<8) \
+ B8(dlsb))


#include <stdio.h>

int main(void)
{
    // 261, evaluated at compile-time
    unsigned const number = B16(00000001,00000101);

    printf("%d \n", number);
    return 0;
}

It works! (All credit goes to Tom Torfs.)
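
As a rough sketch of how it works (my own reading of the macros, not from the original thread):

/* B8(01010101)
 *   -> (unsigned char)B8__(HEX__(01010101))
 *   -> (unsigned char)B8__(0x01010101LU)
 * B8__ then tests each hex nibble of 0x01010101LU; every non-zero
 * nibble marks a binary 1 digit and contributes its power of two:
 * 1 + 4 + 16 + 64 = 85 (0x55).
 */
unsigned char x = B8(01010101);   /* 85 */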