
C++ explain casting uint64 to uint32

I'm trying to cast a uint64_t (representing time in nanoseconds from D-day using a boost chrono high precision clock) to a uint32_t in order to seed a random number generator.

I just want the least significant 32 bits of the uint64_t. Here is my attempt:

uint64_t ticks64 = dtn.count(); // This has the ticks in nanosec
uint64_t ticks32_manual = ticks64 & 0xFFFFFFFF;
uint32_t ticks32_auto = (uint32_t) ticks64;
mexPrintf("Periods: %llu\n", ticks64);
mexPrintf("32-bit manual truncation: %llu\n", ticks32_manual);
mexPrintf("32-bit automatic truncation: %u\n", ticks32_auto);

The output of my code is as follows:

Periods: 651444791362198

32-bit manual truncation: 1331774102

32-bit automatic truncation: 1331774102

I was expecting the last few digits of the 32 and original 64-bit representations to be the same, but they are not. That is, I thought I would "lose the left half" of the 64-bit number.

Can anyone explain what's going on here? Thanks.

Btw, I've seen this link.

asked Apr 02 '15 by Salmonstrikes

1 Answer

As pointed out in the comments, there's nothing wrong with the operation of your code; you're just not visualizing the output in a way that shows the relationship. Truncation keeps the low 32 *bits*, and bits don't line up with decimal digits, so the decimal representations of the 64-bit and 32-bit values won't share a suffix. Printing in hexadecimal, where each digit corresponds to exactly four bits, makes the truncation visible. Here's your code, corrected and runnable:

#include <cstdio>
#include <cstdint>
#include <cinttypes> // PRIX64: portable format specifier for uint64_t

int main() {
  uint64_t ticks64 = 651444791362198ull;
  uint64_t ticks32_manual = ticks64 & 0xFFFFFFFF; // mask keeps the low 32 bits
  uint32_t ticks32_auto = (uint32_t) ticks64;     // conversion also keeps the low 32 bits

  printf("Periods: %" PRIX64 "\n", ticks64);
  printf("32-bit manual truncation: %" PRIX64 "\n", ticks32_manual);
  printf("32-bit automatic truncation: %X\n", ticks32_auto);
}

And the output is:

Periods: 2507C4F614296
32-bit manual truncation: 4F614296
32-bit automatic truncation: 4F614296
answered Oct 24 '22 by Andy Brown