To give some context, I have been writing a basic Perlin noise implementation in Java, and when it came to implementing seeding, I encountered a bug that I couldn't explain.
In order to generate the same random weight vectors for a given seed, no matter which coordinates are queried or in what order, I generated a new seed (newSeed) from a combination of the original seed and the coordinates of the weight vector, and used this as the seed when randomizing the weight vector by running:
rnd.setSeed(newSeed);
weight = new NVector(2);
weight.setElement(0, rnd.nextDouble() * 2 - 1);
weight.setElement(1, rnd.nextDouble() * 2 - 1);
weight.normalize();
where NVector is a self-made class for vector mathematics.
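The exact way newSeed is derived doesn't matter much here; any deterministic combination of the original seed and the grid coordinates will reproduce the problem. Something along these lines (the class, helper name, and mixing constants are illustrative, not the exact code I used):

```java
public class SeedHash {
    // Illustrative sketch: derive a per-vector seed from the world seed and
    // the grid coordinates. Nearby coordinates yield nearby seeds, which is
    // exactly what exposes the problem described below.
    static long weightSeed(long seed, int x, int y) {
        long h = seed;
        h = h * 31L + x;   // 31 is an arbitrary odd multiplier
        h = h * 31L + y;
        return h;
    }

    public static void main(String[] args) {
        // Same inputs always give the same seed; different inputs differ.
        System.out.println(weightSeed(42L, 3, 5));
        System.out.println(weightSeed(42L, 5, 3));
    }
}
```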
However, when run, the program generated very bad noise.
After some digging, I found that the first nextDouble() call after each setSeed() call returned very similar values, resulting in the first element of every vector in the vector grid being nearly the same.
This can be proved by running:
import java.util.Random;

long seed = Long.valueOf(args[0]);
int loops = Integer.valueOf(args[1]);
Random ran = new Random();
double avgFirst = 0.0, avgSecond = 0.0, avgThird = 0.0;
double lastFirst = 0.0, lastSecond = 0.0, lastThird = 0.0;
for (int i = 0; i < loops; i++) {
    ran.setSeed(seed + i);
    double first = ran.nextDouble();
    double second = ran.nextDouble();
    double third = ran.nextDouble();
    avgFirst += Math.abs(first - lastFirst);
    avgSecond += Math.abs(second - lastSecond);
    avgThird += Math.abs(third - lastThird);
    lastFirst = first;
    lastSecond = second;
    lastThird = third;
}
System.out.println("Average first difference.: " + avgFirst / loops);
System.out.println("Average second Difference: " + avgSecond / loops);
// The original accidentally printed avgSecond again here, which is why the
// second and third numbers in the output below are identical.
System.out.println("Average third Difference.: " + avgThird / loops);
This finds the average difference between the first, second and third random numbers generated after each setSeed() call, over a range of seeds specified by the program's arguments. For me it returned these results:
C:\>java Test 462454356345 10000
Average first difference.: 7.44638117976783E-4
Average second Difference: 0.34131692827329957
Average third Difference.: 0.34131692827329957

C:\>java Test 46245445 10000
Average first difference.: 0.0017196011123287126
Average second Difference: 0.3416750057190849
Average third Difference.: 0.3416750057190849

C:\>java Test 1 10000
Average first difference.: 0.0021601598225344998
Average second Difference: 0.3409914232342002
Average third Difference.: 0.3409914232342002
Here you can see that the average first difference is significantly smaller than the second and third, and that it seemingly decreases as the seed gets larger.
As such, by adding a simple dummy call to nextDouble() before setting the weight vector, I was able to fix my Perlin noise implementation:
rnd.setSeed(newSeed);
rnd.nextDouble(); // dummy call to warm up the generator
weight.setElement(0, rnd.nextDouble() * 2 - 1);
weight.setElement(1, rnd.nextDouble() * 2 - 1);
This resulted in properly varied noise.
I would like to know why this poor variation in the first call to nextDouble() occurs (I have not checked other methods of Random), and/or to alert people to this issue. Of course, it could just be an implementation error on my part, in which case I would be grateful to have it pointed out.
The Random class is designed to be a low-overhead source of pseudo-random numbers. But a consequence of the "low overhead" implementation is that the number stream has properties that are a long way off perfect ... from a statistical perspective. You have encountered one of those imperfections. Random is documented as being a linear congruential generator (LCG), and the properties of such generators are well known.
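You can see where the similarity comes from by reproducing the algorithm that the java.util.Random javadoc specifies (the constants below are the documented ones; the class and method names are mine). setSeed() only XORs the seed with a fixed constant, and each next(bits) call is a single multiply-and-add modulo 2^48, so two seeds that differ in only a few low bits still agree closely after just one step:

```java
import java.util.Random;

// Sketch reproducing java.util.Random's documented algorithm:
// setSeed() scrambles the seed with a single XOR, and next(bits)
// performs one LCG step and returns the top bits of the state.
public class LcgStep {
    static final long MULT = 0x5DEECE66DL;      // documented LCG multiplier
    static final long ADD  = 0xBL;              // documented LCG increment
    static final long MASK = (1L << 48) - 1;    // state is 48 bits

    // First next(32) value after setSeed(seed), exactly as Random computes it.
    static int firstInt(long seed) {
        long s = (seed ^ MULT) & MASK;          // setSeed()'s initial scramble
        s = (s * MULT + ADD) & MASK;            // one LCG step
        return (int) (s >>> (48 - 32));         // top 32 bits of the state
    }

    public static void main(String[] args) {
        // Matches the real generator, since the algorithm is specified:
        System.out.println(firstInt(12345L) == new Random(12345L).nextInt()); // → true
        // Adjacent seeds produce close first outputs, because their scrambled
        // states differ only in low bits and have been multiplied only once:
        System.out.println(firstInt(1000L));
        System.out.println(firstInt(1001L));
    }
}
```

After a second step the difference has been multiplied by MULT twice and has wrapped around 2^48, which is why the second and third nextDouble() values in the question look properly random.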
There are a variety of ways of dealing with this. For example, if you are careful you can hide some of the most obvious "poor" characteristics. (But you would be advised to run some statistical tests. You can't see non-randomness in the noise added to your second image, but it could still be there.)
Alternatively, if you want pseudo-random numbers that have guaranteed good statistical properties, then you should be using SecureRandom instead of Random. It has significantly higher overheads, but you can be assured that many "smart people" will have spent a lot of time on the design, testing and analysis of the algorithms.
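A minimal sketch of what that swap looks like (with one caveat that matters for the question's use case, noted in the comment):

```java
import java.security.SecureRandom;

// SecureRandom extends Random, so it is API-compatible with the code in
// the question. Caveat: calling setSeed() on an already-initialised
// SecureRandom *supplements* its internal state rather than replacing it,
// so it does not give reproducible seed-determined sequences the way
// Random does.
public class SecureDemo {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        double x = rnd.nextDouble() * 2 - 1;  // component in [-1, 1), as in the question
        double y = rnd.nextDouble() * 2 - 1;
        System.out.println(x + " " + y);
    }
}
```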
Finally, it is relatively simple to create a subclass of Random that uses an alternative algorithm for generating the numbers; see link. The problem is that you have to select (or design) and implement an appropriate algorithm.
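For example, here is a sketch of such a subclass that replaces the LCG step with a SplitMix64-style mixer (the constants come from the published SplitMix64 algorithm; the class name is mine, and this is an illustration rather than production code):

```java
import java.util.Random;

// A Random subclass whose next(bits) uses a SplitMix64-style step
// instead of the LCG. Every seed bit is avalanched through two
// multiply-xor-shift rounds before any output, so the first
// nextDouble() after seeding no longer correlates across nearby seeds.
public class SplitMixRandom extends Random {
    private long state;

    public SplitMixRandom(long seed) {
        this.state = seed;
    }

    @Override
    public synchronized void setSeed(long seed) {
        this.state = seed;
    }

    @Override
    protected int next(int bits) {
        state += 0x9E3779B97F4A7C15L;  // golden-ratio increment
        long z = state;
        z = (z ^ (z >>> 30)) * 0xBF58476D1CE4E5B9L;
        z = (z ^ (z >>> 27)) * 0x94D049BB133111EBL;
        z = z ^ (z >>> 31);
        return (int) (z >>> (64 - bits));  // top `bits` bits of the mix
    }

    public static void main(String[] args) {
        // All of Random's derived methods (nextDouble, nextInt, ...) now
        // draw from the mixed stream.
        SplitMixRandom rnd = new SplitMixRandom(42L);
        System.out.println(rnd.nextDouble() + " " + rnd.nextDouble());
    }
}
```

Because nextDouble(), nextInt() and friends are all built on next(bits), overriding that one method is enough to change the whole stream.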
Calling this an "issue" is debatable. It is a well known and understood property of LCGs, and the use of an LCG was a conscious engineering choice. People want low-overhead PRNGs, but low-overhead PRNGs have poor properties. TANSTAAFL.
Certainly, this is not something that Oracle would contemplate changing in Random. Indeed, the reasons for not changing it are stated clearly in the javadoc for the Random class:
"In order to guarantee this property, particular algorithms are specified for the class Random. Java implementations must use all the algorithms shown here for the class Random, for the sake of absolute portability of Java code."