uniform_int_distribution has the closed interval [a, b], but uniform_real_distribution has the half-open interval [a, b). A naive fix is to pass something like b + 0.1, but then you start getting into infinitesimals... Luckily, the correct approach is simple:
std::uniform_real_distribution<> dis(start, std::nextafter(stop, DBL_MAX));
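For reference, a compilable sketch of that approach (start, stop, and the mt19937 seeding here are just illustrative):

    #include <cfloat>
    #include <cmath>
    #include <iostream>
    #include <random>

    int main() {
        double start = 0.0, stop = 1.0;
        std::mt19937 gen{std::random_device{}()};
        // One ulp past stop turns the half-open interval into one that
        // includes stop itself.
        std::uniform_real_distribution<> dis(start, std::nextafter(stop, DBL_MAX));
        for (int i = 0; i < 5; ++i)
            std::cout << dis(gen) << '\n';
    }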
But why is this necessary? More specifically, what's the rationale for making these two different?
The uniform real distribution over [a, b) is almost statistically indistinguishable from the distribution over [a, b].

The statistical distance between these two distributions is, very nearly, one divided by the number of floating point numbers between a and b. That is to say, there is no statistical test, returning either 0 or 1 for any given sample, such that the probability of observing a 1 under the first distribution differs from the probability of observing a 1 under the second by more than 2^-32. (Assuming that you are driving e.g. std::uniform_real_distribution<float> using pure entropy from std::random_device.)

Thus in most real applications there is no meaningful difference.
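To put a number on that: assuming IEEE-754, consecutive positive floats have consecutive bit patterns, so the count of representable floats in a range can be read straight off the bits (a sketch; floats_between is a made-up helper name):

    #include <cstdint>
    #include <cstring>
    #include <iostream>

    // Number of representable floats in [a, b), valid for 0 <= a <= b.
    std::uint32_t floats_between(float a, float b) {
        std::uint32_t ua, ub;
        std::memcpy(&ua, &a, sizeof a);
        std::memcpy(&ub, &b, sizeof b);
        return ub - ua;
    }

    int main() {
        // ~2^30 floats in [0, 1), so the statistical distance between
        // sampling [0, 1) and sampling [0, 1] is on the order of 2^-30.
        std::cout << floats_between(0.0f, 1.0f) << '\n';  // 1065353216
    }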
If you want a range (a, b] instead, say to prevent a (highly unlikely) divide-by-zero error in some critical system, you can test whether you got exactly a and replace it with b.
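That remap is a one-liner; a sketch (open_closed is a hypothetical helper, and assumes a < b):

    #include <random>

    // Sample from (a, b]: draw from [a, b) and send the single
    // offending value a to b instead.
    template <class Gen>
    double open_closed(double a, double b, Gen& gen) {
        std::uniform_real_distribution<> dis(a, b);  // yields [a, b)
        double x = dis(gen);
        return x == a ? b : x;
    }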
Also FWIW, I suspect that your solution of "next after b" will not always do what you think. Most of the time, the adaptor is going to compute b - a as a floating point number, take a sample from the entropy source, convert it to a floating point number in the range 0-1 with a static cast, multiply by the factor b - a, add a, and return the result. An epsilon change in b is likely to get lost in the floating point subtraction if a and b are not of exactly the same scale, and then the change would have no effect.
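You can see the nudge vanish when the bounds sit at different scales (a quick check, not a statement about any particular library implementation):

    #include <cfloat>
    #include <cmath>
    #include <iostream>

    int main() {
        double a = -1e9, b = 0.1;  // deliberately mismatched scales
        double hi = std::nextafter(b, DBL_MAX);
        // The one-ulp nudge to b is absorbed when the adaptor forms b - a:
        std::cout << std::boolalpha << (hi - a == b - a) << '\n';  // true
    }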
The reason that this is not a bug is that the standard does not require that the distribution adaptor produce exactly the specified distribution, only that it converge to the target in an approximate sense. The standard does make a hard guarantee that the distribution won't ever generate something outside its stated range:
26.5.8.1 In General [rand.dist.general]
1. Each type instantiated from a class template specified in this section 26.5.8 satisfies the requirements of a random number distribution (26.5.1.6) type.
...
3. The algorithms for producing each of the specified distributions are implementation-defined.
4. The value of each probability density function p(z) and of each discrete probability function P(z_i) specified in this section is 0 everywhere outside its stated domain.
But, the standard also says this about uniform random number generators:
26.5.3.1 [rand.req.urng]
1. A uniform random number generator g of type G is a function object returning unsigned integer values such that each value in the range of possible results has (ideally) equal probability of being returned. [ Note: The degree to which g's results approximate the ideal is often determined statistically. -- end note ]
Since [a, b) and [a, b + epsilon) are basically statistically indistinguishable, you shouldn't consider it a bug if, even with your b + epsilon trick, you never see a b in the output, even if you exhaustively try every possible seed.
If this were not the standpoint of the standard, then all of these adaptors would have to be painstakingly written so that they can always get exactly the right distribution on every architecture, and they would have to run much slower. In most applications minute sampling inaccuracies like this are tolerable and it's more important to be efficient.
Let me offer one rationale: When using integers, you sometimes use them to implement a choice. So you have N values, each of which must be equally likely. Sometimes, the highest possible value could even be the highest value representable in that integer type, so N+1 would be out of range and overflow.
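A concrete instance of that edge case: a closed interval lets the distribution cover every value of the type, whereas a half-open [a, b) would need max() + 1, which doesn't exist (any_u64 is just an illustrative name):

    #include <cstdint>
    #include <limits>
    #include <random>

    // Every uint64_t value is a possible (and equally likely) result.
    std::uniform_int_distribution<std::uint64_t> any_u64(
        0, std::numeric_limits<std::uint64_t>::max());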
For floating point values, you often can't represent the limits of the range exactly anyway (e.g. 0.1), and since floating point numbers are usually not compared for equality (e.g. 1.0/10.0 may or may not compare equal to 0.1), you ignore the fact that the limiting value doesn't occur.
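For instance (results shown are for typical IEEE-754 doubles; the language doesn't guarantee them):

    #include <iostream>

    int main() {
        std::cout << std::boolalpha;
        std::cout << (1.0 / 10.0 == 0.1) << '\n';  // true: both round to the same double
        std::cout << (0.1 + 0.2 == 0.3) << '\n';   // false: the rounding errors differ
    }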