I am implementing a raytracer, and I am in the middle of implementing samplers. A sampler is a generator of random points over the unit square (x in [0, 1], y in [0, 1]). Each sampler holds multiple sets of "random" samples, and each set contains a given number of samples.
Now, one of the samplers is the n-rooks sampler. It divides the square into an n x n grid of cells, places one random point inside each cell along the diagonal, and finally shuffles first the x coordinates among themselves, then the y coordinates.
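For concreteness, here is a minimal sketch (my own code, not the book's; the function name is made up) of generating one n-rooks set this way:

```python
import random

def generate_nrooks_set(n):
    # One random point inside each diagonal cell of an n x n grid:
    # cell i contributes an x and a y, both in [i/n, (i+1)/n).
    xs = [(i + random.random()) / n for i in range(n)]
    ys = [(i + random.random()) / n for i in range(n)]
    # Shuffle the x coordinates among themselves, then the y's;
    # every grid column and row still holds exactly one point.
    random.shuffle(xs)
    random.shuffle(ys)
    return list(zip(xs, ys))
```

After the two shuffles, each of the n grid columns and n grid rows still contains exactly one sample, which is the n-rooks condition.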
This is all nice and clean. However, when it's time to extract the points, the book I'm following proposes additional requirements to break correlations between subsequent pixels and samplings. The first requirement is that every time a set is exhausted, a new sample set is picked at random. The code implementing this is the following:
Point2D Sampler::sample_unit_square(void) {
    if (count % num_samples == 0)
        jump = (rand_int() % num_sets) * num_samples;
    return (samples[jump + count++ % num_samples]);
}
where samples is a vector of Point2D of size num_samples * num_sets (the sets are linearized into one array). Every time one pixel is done (count is divisible by num_samples), a new jump is extracted and used to index into the linear array at the start of a new set.
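A direct Python port of that jump logic might look like this (the class name and attribute layout are mine, for illustration only):

```python
import random

class JumpSampler:
    """Sketch of the book's jump logic: `samples` is a linearized list
    of num_sets * num_samples entries; every time a set is exhausted,
    jump to the start of a randomly chosen set."""

    def __init__(self, samples, num_samples, num_sets):
        self.samples = samples          # length num_samples * num_sets
        self.num_samples = num_samples
        self.num_sets = num_sets
        self.count = 0
        self.jump = 0

    def sample_unit_square(self):
        # At the start of each pass, pick a new set at random.
        if self.count % self.num_samples == 0:
            self.jump = random.randrange(self.num_sets) * self.num_samples
        sample = self.samples[self.jump + self.count % self.num_samples]
        self.count += 1
        return sample
```

Within one pass, this walks num_samples consecutive entries starting at the chosen set's offset, exactly like the C++ version.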
Since I'm using Python, my strategy makes use of iterators:

def __iter__(self):
    while True:
        sample_set = random.choice(self._samples_sets)
        for sample in sample_set:
            yield sample
This is trivial, and works ok.
The second requirement is to shuffle the indices, and this is where my question arises. The book revises the code as follows:
Point2D Sampler::sample_unit_square(void) {
    if (count % num_samples == 0)
        jump = (rand_int() % num_sets) * num_samples;
    return (samples[jump + shuffled_indices[jump + count++ % num_samples]]);
}
where shuffled_indices is an array computed as follows:
void Sampler::setup_shuffled_indices(void) {
    shuffled_indices.reserve(num_samples * num_sets);
    vector<int> indices;

    for (int j = 0; j < num_samples; j++)
        indices.push_back(j);

    for (int p = 0; p < num_sets; p++) {
        random_shuffle(indices.begin(), indices.end());
        for (int j = 0; j < num_samples; j++) {
            shuffled_indices.push_back(indices[j]);
        }
    }
}
which is a very C++ way of taking the numbers from 0 to num_samples - 1 and shuffling them once per set. I wanted to implement the following code in Python:
def __iter__(self):
    while True:
        sample_set = random.choice(self._samples_sets)
        shuffled_set = sample_set[:]
        random.shuffle(shuffled_set)
        for sample in shuffled_set:
            yield sample
I could also implement a random iterator that iterates over the set, saving the list copy, but this is not the point. My question arises from the following phrase in the book:
... Another possibility [to remove correlation] is to use a final shuffle on the samples of each set, but this destroys the n-rooks condition [...]. The best way is to randomly shuffle the indices used in sample_unit_square, for each set, but guarantee that all samples are used.
What I don't understand is: why does it say that a final shuffle on the samples of each set breaks the n-rooks condition? The point is that the book uses indirect indexing into the array of points. This indirect index is created by shuffling, for each set, the indices from 0 to num_samples - 1, but this is equivalent to shuffling all the samples within each set. Being IMHO equivalent, I don't see why the first formulation should break the n-rooks condition while the second wouldn't.
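To make my claim concrete: the n-rooks condition is a property of the set of points, not of the order in which they are yielded, so it should survive any reordering. A quick check (the `is_nrooks` helper is my own):

```python
import random

def is_nrooks(points):
    # Each of the n grid columns and n grid rows must hold exactly
    # one of the n points.
    n = len(points)
    cols = sorted(int(x * n) for x, y in points)
    rows = sorted(int(y * n) for x, y in points)
    return cols == list(range(n)) and rows == list(range(n))

# Start from a diagonal (trivially n-rooks) set of 8 points...
pts = [((i + random.random()) / 8, (i + random.random()) / 8)
       for i in range(8)]
assert is_nrooks(pts)

# ...and reorder it: the property is unaffected by the shuffle.
random.shuffle(pts)
assert is_nrooks(pts)
```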
The book, for the record, is "Ray tracing from the ground up", by Kevin Suffern.
It looks to me like "...use a final shuffle on the samples of each set..." is suggesting to shuffle each set independently after the sets are shuffled.
def __iter__(self):
    while True:
        sample_set = random.choice(self._samples_sets)
        # draw samples at random (with replacement), rather than
        # walking the set in a shuffled order
        for _ in range(len(sample_set)):
            yield random.choice(sample_set)
Like so. I'm not a Python expert, so forgive any code errors. This would break the n-rooks condition, though it may simply be a bad idea. It depends on what you're going for.
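If my reading is right, the difference is easy to demonstrate: random.choice draws with replacement, so within one pass some samples repeat and others are never used, which loses exactly the "all samples are used" guarantee the book insists on. A small check using plain integers as stand-ins for sample points:

```python
import random

sample_set = list(range(8))  # stand-ins for 8 sample points

# Shuffled traversal: every sample appears exactly once per pass.
shuffled = sample_set[:]
random.shuffle(shuffled)
assert sorted(shuffled) == sample_set

# Independent random choices: with replacement, so repeats are the
# norm and some samples are usually skipped within a pass.
drawn = [random.choice(sample_set) for _ in range(8)]
```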