I'm trying to cast a uint64_t (representing time in nanoseconds since D-day, taken from a Boost.Chrono high-resolution clock) to a uint32_t in order to seed a random number generator.
I just want the least significant 32 bits of the uint64_t. Here is my attempt:
uint64_t ticks64 = dtn.count(); // This has the ticks in nanosec
uint64_t ticks32_manual = ticks64 & 0xFFFFFFFF;
uint32_t ticks32_auto = (uint32_t) ticks64;
mexPrintf("Periods: %llu\n", ticks64);
mexPrintf("32-bit manual truncation: %llu\n", ticks32_manual);
mexPrintf("32-bit automatic truncation: %u\n", ticks32_auto);
The output of my code is as follows:
Periods: 651444791362198
32-bit manual truncation: 1331774102
32-bit automatic truncation: 1331774102
I was expecting the last few digits of the 32-bit and the original 64-bit representations to be the same, but they are not. That is, I thought I would just "lose the left half" of the 64-bit number.
Can anyone explain what's going on here? Thanks.
Btw, I've seen this link.
As pointed out in the comments, there is nothing wrong with the operation of your code; you're just not visualizing the output in a way that shows the truncation. The cast keeps the low 32 bits, and because 2^32 is not a power of ten, the trailing decimal digits change completely, while trailing hexadecimal digits (four bits each) are preserved exactly. Here's your code, corrected and runnable:
#include <cstdio>
#include <cstdint>
#include <cinttypes>

int main() {
    uint64_t ticks64 = 651444791362198llu;
    uint64_t ticks32_manual = ticks64 & 0xFFFFFFFF; // mask keeps only the low 32 bits
    uint32_t ticks32_auto = (uint32_t) ticks64;     // the cast performs the same truncation
    // Hex output makes the surviving bits visible; the PRIX64/PRIX32 macros
    // are the portable format specifiers for the fixed-width types.
    printf("Periods: %" PRIX64 "\n", ticks64);
    printf("32-bit manual truncation: %" PRIX64 "\n", ticks32_manual);
    printf("32-bit automatic truncation: %" PRIX32 "\n", ticks32_auto);
}
And the output is:
Periods: 2507C4F614296
32-bit manual truncation: 4F614296
32-bit automatic truncation: 4F614296
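To make the arithmetic concrete: keeping the low 32 bits is the same as reducing the value modulo 2^32 = 4294967296, and indeed 651444791362198 mod 4294967296 = 1331774102, which is exactly what both truncations printed in decimal. The low eight hex digits (4F614296) carry over unchanged because each hex digit is four bits. As a quick check, here is a minimal snippet (using the same sample value) showing that the mask, the modulo, and the cast all agree:

#include <cstdio>
#include <cstdint>
#include <cinttypes>

int main() {
    uint64_t ticks64 = 651444791362198llu;
    uint64_t by_mask = ticks64 & 0xFFFFFFFF;   // keep the low 32 bits with a mask
    uint64_t by_mod  = ticks64 % (1ULL << 32); // equivalent: reduce modulo 2^32
    uint32_t by_cast = (uint32_t) ticks64;     // equivalent: narrowing cast
    printf("%" PRIu64 " %" PRIu64 " %" PRIu32 "\n", by_mask, by_mod, by_cast);
    // Prints: 1331774102 1331774102 1331774102
}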
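And since the original goal was to seed a random number generator: std::mt19937's seed parameter is its result_type, which is uint32_t, so passing the 64-bit tick count already keeps just the low 32 bits. Here is a minimal sketch of that, using std::chrono and the standard <random> engine in place of Boost.Chrono (an assumption on my part; any 64-bit tick source works the same way):

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>

int main() {
    // 64-bit count of ticks since the clock's epoch.
    uint64_t ticks64 = static_cast<uint64_t>(
        std::chrono::high_resolution_clock::now().time_since_epoch().count());
    // The explicit cast documents the truncation; mt19937 would reduce a
    // wider argument modulo 2^32 anyway, since its seed type is uint32_t.
    std::mt19937 gen(static_cast<uint32_t>(ticks64));
    std::uniform_int_distribution<int> dist(1, 6);
    printf("sample roll: %d\n", dist(gen)); // e.g. a die roll from the time-based seed
}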