
What are efficient data structures and algorithms for simulating loaded dice?


You are looking for the alias method, which provides an O(1) method for sampling from a fixed discrete probability distribution (assuming you can access entries in an array of length n in constant time) after a one-time O(n) set-up. You can find it documented in chapter 3 (PDF) of "Non-Uniform Random Variate Generation" by Luc Devroye.

The idea is to take your array of probabilities pk and produce three new n-element arrays, qk, ak, and bk. Each qk is a probability between 0 and 1, and each ak and bk is an integer between 1 and n.

We generate random numbers between 1 and n by generating two random numbers, r and s, between 0 and 1. Let i = floor(r*n) + 1. If qi < s then return ai, else return bi. The work in the alias method is in figuring out how to produce qk, ak and bk.
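
Here is a minimal sketch of the idea in Python, using 0-based indices and the usual "small/large worklist" construction for the tables (none of the names come from Devroye's text):

import random

def build_alias_table(p):
    """One-time O(n) setup: split the distribution into n equal-sized
    buckets, each covered by at most two outcomes."""
    n = len(p)
    q = [n * x for x in p]                   # scaled probabilities, average 1.0
    alias = [0] * n
    small = [i for i, x in enumerate(q) if x < 1.0]
    large = [i for i, x in enumerate(q) if x >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                         # bucket s is topped up by outcome l
        q[l] -= 1.0 - q[s]                   # l gave away that much mass
        (small if q[l] < 1.0 else large).append(l)
    for i in small + large:                  # guard against floating-point round-off
        q[i] = 1.0
    return q, alias

def sample(q, alias):
    """O(1) per draw: pick a bucket uniformly, then one biased coin flip."""
    i = random.randrange(len(q))
    return i if random.random() < q[i] else alias[i]

# Example: P(0) = 0.2, P(1) = 0.5, P(2) = 0.3
q, alias = build_alias_table([0.2, 0.5, 0.3])
counts = [0, 0, 0]
for _ in range(100_000):
    counts[sample(q, alias)] += 1
print(counts)  # roughly [20000, 50000, 30000]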


Use a balanced binary search tree (or binary search in an array) and get O(log n) complexity. Have one node for each die result and have the keys be the interval that will trigger that result.

function get_result(node, seed):
    if seed < node.interval.start:
        return get_result(node.left_child, seed)
    else if seed < node.interval.end:
        // start <= seed < end
        return node.result
    else:
        return get_result(node.right_child, seed)

The good thing about this solution is that it is very simple to implement but still has good complexity.
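
The array variant is even shorter. As a rough sketch (my own names, assuming the probabilities are known up front): precompute the cumulative probabilities once, then binary-search them on each roll, for example with Python's bisect:

import bisect
import random
from itertools import accumulate

probs = [0.2, 0.5, 0.3]                  # P(result 1), P(result 2), P(result 3)
cumulative = list(accumulate(probs))     # [0.2, 0.7, 1.0]

def roll():
    seed = random.random()                     # uniform in [0, 1)
    i = bisect.bisect_right(cumulative, seed)  # O(log n) search
    return min(i, len(probs) - 1) + 1          # 1-based result; min() guards round-off

print(sum(roll() == 2 for _ in range(100_000)))  # roughly 50000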


I'm thinking of granulating your table.

Instead of having a table with the cumulative probability for each die value, you could create an integer array of length x*N, where N is the number of die values and x is ideally a large number, to increase the accuracy of the probabilities.

Populate this array using the index (normalized by x*N) as the cumulative value and, in each 'slot' of the array, store the die result that would be rolled if that index comes up.

Maybe I can explain more easily with an example:

Using a die with three faces: P(1) = 0.2, P(2) = 0.5, P(3) = 0.3

Create an array; in this case I will choose a simple length, say 10 (that is, x = 3.33333).

arr[0] = 1,
arr[1] = 1,
arr[2] = 2,
arr[3] = 2,
arr[4] = 2,
arr[5] = 2,
arr[6] = 2,
arr[7] = 3,
arr[8] = 3,
arr[9] = 3

Then, to get a roll, just generate a random integer between 0 and 9 and access that index.

This method might lose accuracy, but if you increase x the accuracy will be sufficient.
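
As a rough sketch of the same idea in Python (the helper names are mine, and round-off is handled naively):

import random

def build_table(probs, length):
    """Spread the die results over a lookup table; a larger length gives a
    finer approximation of the probabilities."""
    table = []
    for result, p in enumerate(probs, start=1):
        table.extend([result] * round(p * length))
    return table

table = build_table([0.2, 0.5, 0.3], length=10)
# table == [1, 1, 2, 2, 2, 2, 2, 3, 3, 3]

def roll(table):
    return table[random.randrange(len(table))]   # O(1) per roll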


There are many ways to generate a random integer with a custom distribution (also known as a discrete distribution). The choice depends on many things, including the number of integers to choose from, the shape of the distribution, and whether the distribution will change over time.

One of the simplest ways to choose an integer with a custom weight function f(x) is the rejection sampling method; a sketch follows the steps below. In the following, max denotes the highest possible value of f. The time complexity for rejection sampling is constant on average, but it depends greatly on the shape of the distribution and has a worst case of running forever. To choose an integer in [1, k] using rejection sampling:

  1. Choose a uniform random integer i in [1, k].
  2. With probability f(i)/max, return i. Otherwise, go to step 1.
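
A minimal sketch of those two steps in Python (f, k and fmax stand for whatever weight function, range and bound you have; the names are mine):

import random

def rejection_sample(f, k, fmax):
    while True:
        i = random.randint(1, k)            # step 1: uniform candidate in [1, k]
        if random.random() < f(i) / fmax:   # step 2: accept with probability f(i)/max
            return i

# Example: weights 0.2, 0.5, 0.3 for the results 1..3
weights = {1: 0.2, 2: 0.5, 3: 0.3}
print(rejection_sample(weights.get, 3, 0.5))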

Other algorithms have an average sampling time that doesn't depend so greatly on the distribution (usually either constant or logarithmic), but often require you to precalculate the weights in a setup step and store them in a data structure. Some of them are also economical in terms of the number of random bits they use on average. Many of these algorithms were introduced after 2011, and they include—

  • The Bringmann–Larsen succinct data structure ("Succinct Sampling from Discrete Distributions", 2012),
  • Yunpeng Tang's multi-level search ("An Empirical Study of Random Sampling Methods for Changing Discrete Distributions", 2019), and
  • the Fast Loaded Dice Roller (2020).

Other algorithms include the alias method (already mentioned in your article), the Knuth–Yao algorithm, the MVN data structure, and more. See my section "Weighted Choice With Replacement" for a survey.