
Why can't I reinterpret_cast uint to int?


Here's what I want to do:

const int64_t randomIntNumber = reinterpret_cast<int64_t> (randomUintNumber);

Where randomUintNumber is of type uint64_t.

The error is (MSVC 2010):

error C2440: 'reinterpret_cast' : cannot convert from 'const uint64_t' to 'int64_t'
1> Conversion is a valid standard conversion, which can be performed implicitly or by use of static_cast, C-style cast or function-style cast

Why doesn't it compile? Both types have the same bit length; isn't that what reinterpret_cast is intended for?

Asked Jan 31 '13 by Violet Giraffe

People also ask

What does reinterpret_cast mean in C++?

reinterpret_cast is a casting operator in C++. It converts a pointer of one data type into a pointer of another data type, even when the two types are unrelated. It does not check whether the pointer type and the data pointed to by the pointer actually match.

Is reinterpret_cast safe?

The result of a reinterpret_cast cannot safely be used for anything other than being cast back to its original type. Other uses are, at best, nonportable. The reinterpret_cast operator cannot cast away the const , volatile , or __unaligned attributes.

Can reinterpret_cast throw?

No. It is a purely compile-time construct. It is very dangerous, because it lets you get away with very wrong conversions.

What is the difference between static_cast and reinterpret_cast?

static_cast only allows conversions such as int to float, or a base class pointer to a derived class pointer. reinterpret_cast allows almost anything, which is usually dangerous; it is rarely used, typically to convert pointers to and from integers or to allow some kind of low-level memory manipulation.
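For illustration, a rough sketch of the difference (the values and names are hypothetical):

double d = 3.14;
int i = static_cast<int>(d);              // value conversion: i == 3
// int j = reinterpret_cast<int>(d);      // ill-formed: no such conversion exists
int* p = reinterpret_cast<int*>(&d);      // reinterprets the pointer value; reading *p is undefined behavior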


1 Answer

Because that's not what reinterpret_cast is for. All the permitted conversions with reinterpret_cast involve pointers or references, with the exception that an integer or enum type can be reinterpret_cast to itself. This is all defined in the standard, [expr.reinterpret.cast].
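To make the rule concrete, here is a minimal sketch (the variable names are just for illustration) of what that permits and forbids for integers:

uint64_t u = 42;
// int64_t s = reinterpret_cast<int64_t>(u);    // ill-formed: not one of the listed conversions (MSVC's C2440)
uint64_t v = reinterpret_cast<uint64_t>(u);     // OK: an integral type may be reinterpret_cast to its own type
int64_t* p = reinterpret_cast<int64_t*>(&u);    // OK to form the pointer; reading through it is another matter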

I'm not certain what you're trying to achieve here, but if you want randomIntNumber to have the same value as randomUintNumber, then do

const int64_t randomIntNumber = randomUintNumber; 

If that results in a compiler warning, or if you just want to be more explicit, then:

const int64_t randomIntNumber = static_cast<int64_t>(randomUintNumber); 

The result of the cast has the same value as the input if randomUintNumber is less than 2^63. Otherwise the result is implementation-defined, but I expect all known implementations that have int64_t will define it to do the obvious thing: the result is equivalent to the input modulo 2^64.
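For example, on an implementation that behaves this way (as far as I know, every mainstream one does, and since C++20 the modular result is actually required):

const uint64_t big = UINT64_MAX;                      // 2^64 - 1, well above 2^63
const int64_t converted = static_cast<int64_t>(big);  // converted == -1, the int64_t value congruent to UINT64_MAX modulo 2^64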


If you want randomIntNumber to have the same bit-pattern as randomUintNumber, then you can do this:

#include <cstring>  // for std::memcpy

int64_t tmp;
std::memcpy(&tmp, &randomUintNumber, sizeof(tmp));
const int64_t randomIntNumber = tmp;

Since int64_t is guaranteed to use two's complement representation, you would hope that the implementation defines static_cast to have the same result as this for out-of-range values of uint64_t. But it's not actually guaranteed in the standard AFAIK.

Even if randomUintNumber is a compile-time constant, unfortunately here randomIntNumber is not a compile-time constant. But then, how "random" is a compile-time constant? ;-)
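If a C++20 toolchain is available, std::bit_cast expresses the same bit-pattern copy as the memcpy above directly, and unlike the memcpy version it can be used in constant expressions. A sketch assuming C++20 support (the value here is just an example):

#include <bit>        // std::bit_cast (C++20)
#include <cstdint>

constexpr uint64_t randomUintNumber = 0xDEADBEEFCAFEBABEull;                   // example value
constexpr int64_t randomIntNumber = std::bit_cast<int64_t>(randomUintNumber);  // reinterprets the bits, not the value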

If you need to work around that, and you don't trust the implementation to be sensible about converting out-of-range unsigned values to signed types, then something like this:

const int64_t randomIntNumber =
    randomUintNumber <= INT64_MAX
        ? (int64_t) randomUintNumber
        : (int64_t) (randomUintNumber - INT64_MAX - 1) + INT64_MIN;

Now, I'm in favour of writing truly portable code where possible, but even so I think this verges on paranoia.
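If you want to convince yourself that the paranoid version and the plain conversion agree, a quick sanity check on a typical two's complement implementation might look like this (a sketch; the function name is made up):

#include <cassert>
#include <cstdint>

void check(uint64_t randomUintNumber) {
    const int64_t direct = static_cast<int64_t>(randomUintNumber);  // implementation-defined before C++20
    const int64_t portable =
        randomUintNumber <= INT64_MAX
            ? (int64_t) randomUintNumber
            : (int64_t) (randomUintNumber - INT64_MAX - 1) + INT64_MIN;
    assert(direct == portable);  // holds on the usual two's complement implementations
}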


Btw, you might be tempted to write this:

const int64_t randomIntNumber = reinterpret_cast<int64_t&>(randomUintNumber); 

or equivalently:

const int64_t randomIntNumber = *reinterpret_cast<int64_t*>(&randomUintNumber); 

This isn't quite guaranteed to work, because although where they exist int64_t and uint64_t are guaranteed to be a signed type and an unsigned type of the same size, they aren't actually guaranteed to be the signed and unsigned versions of a standard integer type. So it is implementation-specific whether or not this code violates strict aliasing. Code that violates strict aliasing has undefined behavior. The following does not violate strict aliasing, and is OK provided that the bit pattern in randomUintNumber is a valid representation of a value of long long:

unsigned long long x = 0;
const long long y = reinterpret_cast<long long &>(x);

So on implementations where int64_t and uint64_t are typedefs for long long and unsigned long long, then my reinterpret_cast is OK. And as with the implementation-defined conversion of out-of-range values to signed types, you would expect that the sensible thing for implementations to do is to make them corresponding signed/unsigned types. So like the static_cast and the implicit conversion, you expect it to work in any sensible implementation but it is not actually guaranteed.
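If you want the compiler to tell you whether you are on such an implementation, one possible compile-time check (a sketch using C++11's <type_traits>) is:

#include <cstdint>
#include <type_traits>

// Passes only when int64_t is the signed counterpart of uint64_t, which is the case
// in which the reference-style reinterpret_cast does not violate strict aliasing.
static_assert(std::is_same<std::make_signed<uint64_t>::type, int64_t>::value,
              "int64_t is not the signed counterpart of uint64_t on this implementation");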

Answered Oct 09 '22 by Steve Jessop