I've been working on a physics simulation that requires generating a large quantity of random numbers (at least 10^13, to give you an idea). I've been using the C++11 implementation of the Mersenne Twister. I've also read that GPU implementations of this same algorithm are now part of the CUDA libraries and that GPUs can be extremely efficient at this task; but I couldn't find explicit numbers or a benchmark comparison. For example, compared to an 8-core i7, are NVIDIA cards of the last few generations more performant at generating random numbers? If yes, by how much, and in which price range?
I'm thinking that my simulation could gain from having the GPU generate a huge pile of random numbers while the CPU does the rest.
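To make the idea concrete, here's a rough sketch of what I imagine the GPU side looking like, using cuRAND's host API with its MTGP32 generator (the GPU-adapted Mersenne Twister variant). The batch size and seed below are arbitrary placeholders, and I've left out error checking:

    #include <cuda_runtime.h>
    #include <curand.h>

    int main() {
        const size_t n = 1 << 24;               // ~16.7M floats per batch (placeholder)
        float *d_buf = nullptr;
        cudaMalloc(&d_buf, n * sizeof(float));

        curandGenerator_t gen;
        // MTGP32 is cuRAND's GPU-adapted Mersenne Twister
        curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_MTGP32);
        curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);

        // Fill the device buffer with uniform floats in (0, 1];
        // the simulation kernels would then consume d_buf on the GPU.
        curandGenerateUniform(gen, d_buf, n);
        cudaDeviceSynchronize();

        curandDestroyGenerator(gen);
        cudaFree(d_buf);
        return 0;
    }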
You can think of the seed as a small snippet of randomness that the generator expands into a long deterministic sequence. The Mersenne Twister has largely replaced linear congruential generators and is still the most popular PRNG algorithm in use today. While pseudorandom numbers aren't truly random, there are good reasons they're used: they're fast, reproducible, and statistically well behaved.
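For example, with the C++11 facilities you mention, the seed fully determines the stream, so the same seed always reproduces the same sequence (a minimal sketch):

    #include <iostream>
    #include <random>

    int main() {
        std::mt19937 a(42), b(42);                        // two generators, identical seed
        std::uniform_real_distribution<double> u(0.0, 1.0);
        for (int i = 0; i < 3; ++i)
            std::cout << u(a) << " == " << u(b) << '\n';  // streams match exactly
        return 0;
    }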
Surprisingly, the general-purpose random number generators in most widespread use are easily predicted. (In contrast, RNGs used to construct stream ciphers for secure communication are believed to be infeasible to predict, and are known as cryptographically secure.)
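To give a flavor of why: MT19937's output tempering is an invertible bit transform, so each 32-bit output reveals one internal state word, and 624 consecutive outputs reveal the whole state. A sketch of the standard untempering trick (for illustration, not production code):

    #include <cstdint>

    // Invert MT19937's tempering: recover the internal state word that
    // produced a given 32-bit output. Collecting 624 consecutive outputs
    // therefore reconstructs the full generator state, after which every
    // future output can be predicted.
    uint32_t untemper(uint32_t y) {
        y ^= y >> 18;                         // invert y ^= y >> 18 (self-inverse)
        y ^= (y << 15) & 0xEFC60000u;         // invert y ^= (y << 15) & mask (self-inverse)
        uint32_t x = y;                       // invert y ^= (y << 7) & 0x9D2C5680
        for (int i = 0; i < 4; ++i)           // each pass fixes 7 more low bits
            x = y ^ ((x << 7) & 0x9D2C5680u);
        y = x;
        x = y;                                // invert y ^= y >> 11
        for (int i = 0; i < 2; ++i)           // each pass fixes 11 more high bits
            x = y ^ (x >> 11);
        return x;
    }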
Physical methods: the earliest methods for generating random numbers, such as dice, coin flipping, and roulette wheels, are still used today, mainly in games and gambling, as they tend to be too slow for most applications in statistics and cryptography.
A true random number generator (TRNG), also known as a hardware random number generator (HRNG), does not use a computer algorithm. Instead, it uses an external, unpredictable physical variable, such as the radioactive decay of isotopes or atmospheric radio static, to generate random numbers.
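On most platforms you can tap such an entropy source through the OS; in C++ the usual route is std::random_device, which is typically backed by the OS entropy pool (though the standard doesn't guarantee non-determinism on every platform, and entropy() may report 0 on some implementations). Since true entropy sources are slow, the common pattern is to harvest a little real entropy for the seed and let a fast PRNG do the bulk work, roughly like this:

    #include <iostream>
    #include <random>

    int main() {
        std::random_device rd;               // usually backed by OS/hardware entropy
        std::cout << "entropy estimate: " << rd.entropy() << '\n';

        // Seed a fast PRNG from the entropy source, then draw from the PRNG.
        std::mt19937_64 rng(rd());
        std::cout << "first draw: " << rng() << '\n';
        return 0;
    }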
Some comparisons can be found here: https://developer.nvidia.com/cuRAND