How do I create a user-defined literal for a signed integer type?


As I discovered from this post, the parameter types allowed for a user-defined literal operator are as follows (a minimal conforming example is sketched after the list):

const char*
unsigned long long int
long double
char
wchar_t
char16_t
char32_t
const char*, std::size_t
const wchar_t*, std::size_t
const char16_t*, std::size_t
const char32_t*, std::size_t
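
For illustration, here is a minimal sketch of a literal operator that conforms to these rules, using the unsigned long long int form (the suffix name _kb is just a made-up example):

#include <iostream>

// A conforming numeric literal operator: the integer form must take
// unsigned long long int, regardless of what the literal is used for.
constexpr unsigned long long operator"" _kb(unsigned long long n) {
    return n * 1024;
}

int main() {
    std::cout << 4_kb << '\n';  // prints 4096
}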

Well, the only (potentially) signed integer type I see in that list is char, which is too small. What if I wanted to do something like this:

int operator"" _i(int i) {  // int is not among the allowed parameter types
    return i * 2;
}

Then, when I write -1000_i, I would expect to get -2000. How do I do this?
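
For context, the closest legal sketch I can come up with (assuming the usual parsing, where -1000_i is the unary minus applied to the literal 1000_i, so the operator itself only ever sees 1000):

#include <iostream>

// Sketch: take the only permitted integer parameter type and return a
// signed type; the leading '-' in -1000_i is applied after the call.
constexpr long long operator"" _i(unsigned long long n) {
    return static_cast<long long>(n) * 2;
}

int main() {
    std::cout << -1000_i << '\n';  // -(1000_i) == -2000
}

But with this the operator never actually receives a negative value, so is there a way to get the sign into the operator itself?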