/dev/random and /dev/urandom use environmental noise to generate randomness.
With a virtualised server there can be multiple instances of an operating system running on the same hardware. These operating systems will all be sourcing their randomness from the same environmental noise.
Does this mean that, as a group, the random number generators' strength is reduced because all OS instances base their calculations on the same input? Or is the environmental noise partitioned out so that sharing doesn't occur?
If the latter is true, I can see this reducing the effectiveness of /dev/urandom, because it reuses its internal pool, and with less environmental input its entropy is reduced.
/dev/random should be OK because it blocks until enough noise is acquired... unless, of course, the OS instances are all sharing the input.
So, the question: What is the impact of virtualisation on cryptographically strong random number generators, specifically those that use environmental noise?
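For reference, this is how I'm looking at what the kernel reports on a given guest. A minimal sketch, assuming Linux and Python 3; /proc/sys/kernel/random/entropy_avail is the kernel's own entropy estimate, and its exact meaning varies with kernel version:

    # Minimal sketch (assumes Linux, Python 3): read the kernel's entropy
    # estimate and pull a few bytes from /dev/urandom.

    def entropy_estimate():
        # The kernel's current entropy-pool estimate, in bits.
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            return int(f.read().strip())

    def read_urandom(nbytes=16):
        # Read nbytes from the non-blocking pool.
        with open("/dev/urandom", "rb") as f:
            return f.read(nbytes)

    if __name__ == "__main__":
        print("entropy_avail:", entropy_estimate(), "bits")
        print("urandom sample:", read_urandom().hex())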
Random number generators have applications in gambling, statistical sampling, computer simulation, cryptography, completely randomized design, and other areas where producing an unpredictable result is desirable.
In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other purposes that require random data.
Because the outcome of quantum-mechanical events cannot be predicted even in principle, they are the 'gold standard' for random number generation. Quantum phenomena used for random number generation include shot noise, a quantum-mechanical noise source in electronic circuits.
Monte Carlo methods are commonly used to estimate unknown ratios and areas; a classic example is estimating the value of pi by sampling random points. Monte Carlo simulation methods do not always require truly random numbers to be effective: deterministic pseudorandom sequences are often preferred because they make it easy to test and re-run simulations [9].
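To illustrate that point, here is a minimal sketch of the pi estimate using Python's random module with a fixed seed, so each run is exactly reproducible (the seed and sample count are arbitrary choices, not taken from the original example):

    import random

    def estimate_pi(samples=100_000, seed=42):
        # Fraction of uniformly random points in the unit square that fall
        # inside the quarter circle of radius 1, scaled by 4.
        rng = random.Random(seed)   # deterministic PRNG -> reproducible runs
        inside = 0
        for _ in range(samples):
            x, y = rng.random(), rng.random()
            if x * x + y * y <= 1.0:
                inside += 1
        return 4.0 * inside / samples

    if __name__ == "__main__":
        print(estimate_pi())  # same seed, same estimate on every run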
I couldn't find any references quickly, but it would seem to me that the entropy is derived from the kernel data structures for the devices, not the actual devices themselves. Since these would be independent regardless of virtualization, I suspect the answer is not much.
[EDIT] After peeking at the kernel source (actually the patch history), it looks like Linux, at least, gathers entropy from keyboard presses, mouse activity, interrupt timing (but not all interrupts), and block-device request completion times. On a virtualized system, I suspect mouse/keyboard events would be pretty sparse and thus contribute little to the entropy gathered. Presumably this would be offset by additional network I/O interrupt activity, but it's not clear. In this respect, I don't think it differs much from a non-VM server.
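One rough way to check how starved a given guest actually is would be a non-blocking probe of the blocking pool (the one /dev/random draws from). A minimal sketch, assuming Linux with Python 3.6+; note that newer kernels have changed /dev/random's blocking semantics, so whether this ever reports "starved" depends on the kernel version:

    import os

    def probe_blocking_pool(nbytes=32):
        # GRND_RANDOM requests the blocking pool; GRND_NONBLOCK makes the
        # call fail instead of waiting when the kernel's entropy estimate
        # is too low to satisfy the request.
        try:
            data = os.getrandom(nbytes, os.GRND_RANDOM | os.GRND_NONBLOCK)
            print("pool satisfied the request:", data.hex())
        except BlockingIOError:
            print("pool currently starved; /dev/random would block here")

    if __name__ == "__main__":
        probe_blocking_pool()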