Even after following many articles on SecureRandom, I still have a doubt about the usage of the SecureRandom security API in Java. Consider the example below:
import java.nio.ByteBuffer;
import java.security.SecureRandom;
import java.util.Random;
import java.util.TreeSet;

public class SecureRandomNumber {
    public static void main(String[] args) {
        TreeSet<Integer> secure = new TreeSet<Integer>();
        TreeSet<Integer> unSecure = new TreeSet<Integer>();
        SecureRandom sr = new SecureRandom();
        byte[] sbuf = sr.generateSeed(8);
        ByteBuffer bb = ByteBuffer.wrap(sbuf);
        long d = bb.getLong();
        sr.setSeed(d);
        Random r = new Random();
        r.setSeed(System.nanoTime());
        for (int k = 0; k < 99999; k++) {
            int i = sr.nextInt();
            if (!secure.add(i)) {
                System.out.println("Repeated Secure Random Number");
            }
            int j = r.nextInt();
            if (!unSecure.add(j)) {
                System.out.println("Repeated UnSecure Random Number");
            }
        }
    }
}
When I run this program, I do not see any additional benefit of using SecureRandom, as it gives almost the same result. Can anyone tell me whether I am doing the right thing here?
Every instance of SecureRandom is created with an initial seed. The seed serves as the base for producing random values and is updated every time a new value is generated. Creating an instance with the new operator or via SecureRandom.getInstance() obtains the default seed from /dev/urandom.
Unfortunately, SecureRandom can be very slow. If it uses /dev/random on Linux, it can block while waiting for sufficient entropy to build up.
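As a minimal sketch of the point above, the different ways of obtaining a SecureRandom differ in where the seed material comes from (the /dev/urandom vs. /dev/random behavior assumed here is typical of Linux JDKs, but is configurable via java.security):

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class SecureRandomSources {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Default instance: self-seeds lazily from the OS source
        // (typically /dev/urandom), so it does not block in practice.
        SecureRandom fast = new SecureRandom();

        // "Strong" instance: may be backed by a blocking source
        // such as /dev/random, so it can stall on low-entropy systems.
        SecureRandom strong = SecureRandom.getInstanceStrong();

        // generateSeed() queries the underlying seed source directly
        // and is the call most likely to be slow.
        byte[] seed = fast.generateSeed(16);
        System.out.println("seed length: " + seed.length);
    }
}
```

For most applications the default `new SecureRandom()` is the right choice; reserve `getInstanceStrong()` for long-lived key material.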
public SecureRandom(byte[] seed)
Constructs a secure random number generator (RNG) implementing the default random number algorithm. The SecureRandom instance is seeded with the specified seed bytes. This constructor traverses the list of registered security Providers, starting with the most preferred Provider.
No, a SecureRandom instance does not guarantee unique results. If it did guarantee that, it wouldn't be entirely random, as you would know that you couldn't get a result that you already received.
You are a victim of a common misconception about random numbers in general: a random sequence does not mean that no number can repeat within it. Quite the contrary: with high probability, some number will repeat. That misconception is actually used to distinguish a "random" sequence generated by a human from a real one. A "random" sequence of 0s and 1s generated by a human will probably look like this:
0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, ....
while a real random sequence is not shy of repeating the same number more than twice :) Tellingly, statistical tests of randomness explicitly look for such repetitions.
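The birthday paradox makes this concrete: drawing from a small range, repeats become overwhelmingly likely long before the range is exhausted. A minimal sketch (class name and the 365/200 parameters are illustrative choices, not from the original post):

```java
import java.security.SecureRandom;
import java.util.HashSet;
import java.util.Set;

public class RepeatsHappen {
    public static void main(String[] args) {
        SecureRandom sr = new SecureRandom();
        Set<Integer> seen = new HashSet<>();
        int repeats = 0;
        // Draw 200 values from a range of only 365; by the birthday
        // paradox, at least one repeat is overwhelmingly likely even
        // though the generator is cryptographically secure.
        for (int i = 0; i < 200; i++) {
            if (!seen.add(sr.nextInt(365))) {
                repeats++;
            }
        }
        System.out.println("repeats observed: " + (repeats > 0));
    }
}
```

A generator that never repeated a value in that range would itself be evidence of non-randomness.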
It's also a common misbelief that cryptographically secure random numbers would somehow yield "much more random" values. Their statistical probabilities will probably be pretty much alike and both will perform really well in those standard statistical tests.
So whether your choice should be a PRNG or a cryptographically secure PRNG (CSPRNG) depends on what you want to do. "Normal" PRNGs are perfectly fine for simulation purposes such as Monte Carlo methods. The additional benefit a CSPRNG gives you is non-predictability. Because the CSPRNG can "do more", chances are high that its performance will be worse than that of a vanilla PRNG.
It can be shown that the concept of a "secure" PRNG is tightly coupled with the ability to predict the next bit of its output. For a CSPRNG, predicting the next bit of its output at any time is computationally infeasible. This only holds if you treat its seed value as a secret, of course. Once anyone finds out the seed, the whole thing becomes easily predictable - just recompute the values already generated by the CSPRNG's algorithm and then compute the next value. It can further be shown that being immune to "next-bit prediction" actually implies that there's no statistical test whatsoever that could distinguish the distribution of the CSPRNG from that of a real random uniform distribution. So there's another difference between PRNG and CSPRNG: While a good PRNG will perform well in many statistical tests, a CSPRNG is guaranteed to perform well in all tests.
The rule of thumb for where to use which: reach for a CSPRNG whenever an attacker must not be able to predict the output (keys, tokens, session identifiers); a plain PRNG is fine for simulation and sampling.
Secure and insecure algorithms will frequently give almost the same result. You can't detect a security flaw in the output. A door with an unpickable lock and a door with a lock that can trivially be picked look pretty much the same and neither will open if you just turn the handle. This is one of the reasons that writing secure code and handling things like encryption and authentication is an area of programming with specialized techniques for design, development, and particularly testing.