/dev/random Extremely Slow?

Tags:

linux

terminal

Some background info: I was looking to run a script on a Red Hat server to read some data from /dev/random and use the Perl unpack() command to convert it to a hex string for later use (benchmarking database operations). I ran a few "head -1" calls on /dev/random and it seemed to be working fine, but after calling it a few times, it would just kind of hang. After a few minutes, it would finally output a small block of text, then finish.
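For reference, the read-and-hex step looked roughly like this (a sketch, not my exact script; the 16-byte read size is just an example):

    # Read 16 bytes from /dev/random and print them as a hex string.
    perl -e 'read(STDIN, my $buf, 16) or die "read: $!"; print unpack("H*", $buf), "\n";' < /dev/random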

I switched to /dev/urandom (I really didn't want to, it's slower and I don't need that quality of randomness) and it worked fine for the first two or three calls, then it too began to hang. I was wondering if it was the "head" command that was bombing it, so I tried doing some simple I/O using Perl, and it too was hanging. As a last-ditch effort, I used the "dd" command to dump some data out of it directly to a file instead of to the terminal. All I asked of it was 1 MB of data, but it took 3 minutes to get ~400 bytes before I killed it.
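The dd invocation was something along these lines (paraphrasing; bs=1 forces byte-at-a-time reads, so it blocks whenever the pool is empty):

    # Try to pull 1 MB out of /dev/random one byte at a time;
    # on an entropy-starved machine this crawls along at a few bytes per second.
    dd if=/dev/random of=random.bin bs=1 count=1048576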

I checked the process lists; CPU and memory were basically untouched. What exactly could cause /dev/random to crap out like this, and what can I do to prevent/fix it in the future?

Edit: Thanks for the help, guys! It seems that I had random and urandom mixed up. I've got the script up and running now. Looks like I learned something new today. :)

asked Jan 27 '11 by Mr. Llama


2 Answers

On most Linux systems, /dev/random is fed by actual entropy gathered from the environment. If your system isn't delivering a large amount of data from /dev/random, it likely means that you're not generating enough environmental randomness to feed it.
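You can watch the kernel's entropy estimate directly; if reads from /dev/random are blocking, this number (in bits) will be hovering near zero:

    # Current size of the kernel's entropy pool, in bits.
    cat /proc/sys/kernel/random/entropy_avail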

I'm not sure why you think /dev/urandom is "slower" or higher quality. It reuses an internal entropy pool to generate pseudorandomness - making it slightly lower quality - but it doesn't block. Generally, applications that don't require high-level or long-term cryptography can use /dev/urandom reliably.

Try waiting a little while, then reading from /dev/urandom again. It's possible that you've exhausted the internal entropy pool by reading so much from /dev/random, starving both generators - allowing your system to create more entropy should replenish them.

See Wikipedia for more info about /dev/random and /dev/urandom.

answered Oct 12 '22 by Tim


This question is pretty old, but it's still relevant, so I'm going to give my answer. Many CPUs today come with a built-in hardware random number generator (RNG), and many systems come with a trusted platform module (TPM) that also provides an RNG. There are other options that can be purchased, but chances are your computer already has something.

You can use rngd from the rng-tools package (available on most Linux distros) to seed more random data. For example, on Fedora 18, all I had to do to enable seeding from the TPM and the CPU RNG (the RDRAND instruction) was:

# systemctl enable rngd
# systemctl start rngd

You can compare speed with and without rngd. It's a good idea to run rngd -v -f from the command line; that will show you the detected entropy sources. Make sure all the modules needed to support your sources are loaded. To use the TPM, it needs to be activated through tpm-tools. Update: here is a nice howto.
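For example, a quick before/after benchmark (the 1 KiB count here is arbitrary; run it once with rngd stopped and once with it running):

    # Time how long it takes to read 1 KiB from /dev/random.
    time dd if=/dev/random of=/dev/null bs=1 count=1024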

BTW, I've read some concerns on the Internet about TPM RNGs being broken in various ways, but I haven't read anything concrete against the RNGs found in Intel, AMD, and VIA chips. Using more than one source is best if you really care about randomness quality.

urandom is good for most use cases (except sometimes during early boot). Most programs nowadays use urandom instead of random; even OpenSSL does. See myths about urandom and the comparison of random interfaces.
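If all you need is a hex string, as in the question, reading /dev/urandom from the shell won't block (assuming xxd is installed; od -A n -t x1 works as well):

    # Read 16 random bytes and print them as plain hex.
    head -c 16 /dev/urandom | xxd -p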

In recent Fedora and RHEL/CentOS, rng-tools also supports jitter entropy. You can use it when hardware options are lacking, or if you just trust it more than your hardware.

UPDATE: another option for more entropy is HAVEGED (its quality has been questioned). On virtual machines there is the kvm/qemu VirtIO RNG (recommended).

UPDATE 2: since Linux 5.6, the kernel gathers its own jitter entropy.

answered Oct 12 '22 by akostadinov