I'm trying to run several instances of a piece of code (2000 instances or so) concurrently on a computing cluster. The way it works is that I submit the jobs and the cluster runs them as nodes open up every so often, with several jobs per node. Because the random number generation uses a time-based seed, a good number of the instances end up producing the same values.
Is there a simple alternative I can use instead? Reproducibility and security are not important; quick generation of unique seeds is. What would be the simplest approach, ideally a cross-platform one?
The function srand() provides the seed for the random number generator, while rand() returns the next number in the pseudorandom sequence.
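For reference, here is a minimal sketch of the usual time-seeded pattern, which also illustrates the problem described in the question: two jobs that start within the same second get the same seed and therefore draw the identical sequence from rand().

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        /* time(NULL) has one-second resolution, so concurrent jobs
           started in the same second all seed identically. */
        srand((unsigned int)time(NULL));
        for (int i = 0; i < 3; i++)
            printf("%d\n", rand());
        return 0;
    }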
Computational random number generators can typically generate pseudorandom numbers much faster than physical generators, while physical generators can generate "true randomness."
The rdtsc instruction is a pretty reliable (and random) seed. On Windows it's accessible via the __rdtsc() intrinsic.
In GNU C, it's accessible via:
    unsigned long long rdtsc(void)
    {
        unsigned int lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
    }
The instruction reads the processor's time-stamp counter, which counts cycles elapsed since the processor was powered on. Given the high clock frequencies of today's machines, it's extremely unlikely that two processors will return the same value, even if they booted at the same time and are clocked at the same speed.
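As an illustration, here is a sketch of feeding the counter into srand(). It assumes the __rdtsc() intrinsic, which MSVC exposes via <intrin.h> and GCC/Clang via <x86intrin.h>, so the same source should build on both Windows and Linux on x86 hardware:

    #include <stdio.h>
    #include <stdlib.h>
    #ifdef _MSC_VER
    #include <intrin.h>      /* __rdtsc() on MSVC */
    #else
    #include <x86intrin.h>   /* __rdtsc() on GCC/Clang */
    #endif

    int main(void)
    {
        /* Seed with the time-stamp counter instead of time(NULL); the counter
           advances every cycle, so concurrent jobs are very unlikely to collide. */
        srand((unsigned int)__rdtsc());
        printf("%d %d %d\n", rand(), rand(), rand());
        return 0;
    }

Truncating to unsigned int keeps only the low 32 bits of the counter, but those are exactly the bits that change fastest, which is what matters for distinguishing seeds.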