Let's say I randomly generate a number and check whether it falls in a certain range. For integers it's simple: for example, with an unsigned 8-bit number, the probability that it lands in the range 0 to 5 inclusive is 6/2^8.
My question is: how can I calculate the same thing for a floating-point number? For example, if I just randomly generate 32 bits, what is the probability that the resulting number lies between -10.0 and 10.0?
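To make the question concrete, here is a rough Monte Carlo sketch in Python (my own, assuming "randomly generate 32 bits" means a uniformly random bit pattern reinterpreted as an IEEE-754 binary32 float):

```python
import random
import struct

# Estimate P(-10.0 <= x <= 10.0) for a uniformly random 32-bit pattern
# reinterpreted as an IEEE-754 binary32 float (assumption, not stated
# in the question).
N = 1_000_000
hits = 0
for _ in range(N):
    bits = random.getrandbits(32)
    (x,) = struct.unpack('<f', struct.pack('<I', bits))
    if -10.0 <= x <= 10.0:      # NaN compares false, so NaN patterns fall outside
        hits += 1
print(hits / N)                  # comes out close to 0.51
```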
Assuming a binary representation, the probability can be computed exactly for ranges of the form [2^n, 2^(n+1)).
For example, if the exponent is encoded on 11 bits, that probability is 1/2^12 (taking the sign bit into account).
Inside such an interval, you can treat the representable floating-point numbers as uniformly distributed (they are evenly spaced within a binade).
Then I guess you could decompose your interval along such power-of-2 boundaries,
then compute the probability of each sub-interval and sum them all (see the sketch below)...
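A minimal sketch of that procedure, assuming IEEE-754 binary32 (8 exponent bits, bias 127) and a uniform density inside the one partially covered binade; the constants and function name below are my own, not part of any standard API:

```python
import math

EXP_BITS = 8                            # IEEE-754 binary32 exponent width (assumption)
BIAS = (1 << (EXP_BITS - 1)) - 1        # 127
P_BINADE = 1.0 / 2 ** (1 + EXP_BITS)    # mass of one (sign, exponent) combination = 1/2^9
E_MIN = 1 - BIAS                        # exponent of the smallest normal binade, -126

def prob_zero_to(limit):
    """Approximate P(0 <= x <= limit) for a random bit pattern x,
    assuming limit is a normal positive float.  Each fully covered
    binade contributes P_BINADE; the binade containing `limit` is
    treated as uniformly dense."""
    _, exp = math.frexp(limit)          # limit = frac * 2**exp, frac in [0.5, 1)
    full_binades = (exp - 1) - E_MIN    # normal binades [2^E_MIN, 2^(exp-1)) fully covered
    p = (full_binades + 1) * P_BINADE   # +1 for the zero/subnormal range [0, 2^E_MIN)
    lo, hi = 2.0 ** (exp - 1), 2.0 ** exp
    p += P_BINADE * (limit - lo) / (hi - lo)   # partial binade [2^(exp-1), limit)
    return p

# [-10, 10] is symmetric, so double the one-sided probability (sign bit).
print(2 * prob_zero_to(10.0))           # ~0.5088
```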
Assuming an IEEE-754-like denormal (subnormal) representation, for the smallest possible exponent e the interval is [0, 2^e).
So this should give you a rather simple procedure, but I see no simple formula.
For very accurate probabilities, you'll have to look at the significand bit patterns of the nearest representable floats to the lower and upper bounds inside the interval.
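If you want the exact count for the [-10, 10] example (assuming IEEE-754 binary32, every one of the 2^32 bit patterns equally likely, and NaN/infinity counted as outside the range), you can exploit the fact that for non-negative finite floats the unsigned bit-pattern order matches the numeric order, so the count is just the top bit pattern plus one. A sketch:

```python
import struct

def bits_of(x):
    """Bit pattern of x as an IEEE-754 binary32 float (x must be representable)."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def exact_prob_symmetric(limit):
    """Exact P(-limit <= x <= limit) for a uniformly random 32-bit
    pattern read as binary32.  The non-negative patterns in
    [+0.0, +limit] are exactly 0 .. bits_of(limit); the negative side
    mirrors them with the sign bit set."""
    top = bits_of(limit)                 # 0x41200000 for 10.0
    per_sign = top + 1                   # +0.0 .. +limit (and -0.0 .. -limit by symmetry)
    return 2 * per_sign / 2 ** 32

print(exact_prob_symmetric(10.0))        # 0.508789...
```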