I want to securely generate a random number in the range [0, N), where N is a parameter. However, System.Security.Cryptography.RandomNumberGenerator only provides a GetBytes() method to fill an array with random values.
(I need the random integers for nonces used in a slightly modified version of SRP. The "slightly modified" part is out of my control, and the only reason I'm even touching crypto stuff.)
I have written a method to do this, but I'm looking for a better way or at least confirmation that I'm doing it right.
using System;
using System.Numerics;
using System.Diagnostics.Contracts;

///<summary>Generates a uniformly random integer in the range [0, bound).</summary>
public static BigInteger RandomIntegerBelow(this System.Security.Cryptography.RandomNumberGenerator source, BigInteger bound) {
    Contract.Requires<ArgumentException>(source != null);
    Contract.Requires<ArgumentException>(bound > 0);
    Contract.Ensures(Contract.Result<BigInteger>() >= 0);
    Contract.Ensures(Contract.Result<BigInteger>() < bound);

    //Get a byte buffer capable of holding any value below the bound
    var buffer = (bound << 16).ToByteArray(); // << 16 adds two bytes, which decreases the chance of a retry later on

    //Compute where the last partial fragment starts, in order to retry if we end up in it
    var generatedValueBound = BigInteger.One << (buffer.Length * 8 - 1); //-1 accounts for the sign bit
    Contract.Assert(generatedValueBound >= bound);
    var validityBound = generatedValueBound - generatedValueBound % bound;
    Contract.Assert(validityBound >= bound);

    while (true) {
        //generate a uniformly random value in [0, 2^(buffer.Length * 8 - 1))
        source.GetBytes(buffer);
        buffer[buffer.Length - 1] &= 0x7F; //force sign bit to positive
        var r = new BigInteger(buffer);

        //return unless in the partial fragment
        if (r >= validityBound) continue;
        return r % bound;
    }
}
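A call site would look something like this (the bound here is just a placeholder; in the real protocol it comes from the SRP group parameters):

using (var rng = System.Security.Cryptography.RandomNumberGenerator.Create())
{
    var bound = BigInteger.Pow(2, 256);        // stand-in for the real modulus
    var nonce = rng.RandomIntegerBelow(bound); // uniform in [0, bound)
}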
Your code looks correct and unbiased. However, you may want to change it a bit if you are after performance, depending on the speed of the random source you are using. The idea is to mask out a few more bits so that the random value r is smaller than 2*bound. If the length in bits of bound is x (length excluding the sign bit), then you generate a buffer of n = (x + 8) / 8 bytes and mask out the upper n*8 - x bits. In C#, this should look like:
var x = BitLength(bound);               // bit length of bound (helper discussed below)
var n = (x + 8) / 8;                    // enough bytes for x bits, with the sign bit left clear
var buffer = new byte[n];
var mask = (byte)(0xFF >> (8 * n - x)); // clears the upper n*8 - x bits of the top byte
while (true) {
    source.GetBytes(buffer);
    buffer[n - 1] &= mask;              // now r < 2^x <= 2*bound, so each trial succeeds with probability > 1/2
    var r = new BigInteger(buffer);
    if (r < bound)
        return r;
}
With this kind of code, you may have to ask the source for more random bytes, but you avoid the modular reduction (the % operator). A proper PRNG should be much faster than division on big integers, so this should be a better trade-off -- but this really depends on the performance of the random source, and, since it is a question of performance, it cannot be fully answered without trying. Chances are that, as part of an overall SRP implementation, it would not make any noticeable difference anyway.

I have used above a BitLength() function which does not exist in C# (Java's BigInteger class has a bitLength() method, but apparently Microsoft forgot to include one in their big integer implementation -- which is a shame, since the value could be obtained cheaply from the internal _bits array). The bit length can, however, be computed efficiently from the byte representation of the bound value. Thus, the code would become something like this:
var buffer = bound.ToByteArray(); // little-endian: the last byte is the most significant
var n = buffer.Length;
var msb = buffer[n - 1];          // most significant byte of bound (always < 0x80 for a positive value)
var mask = 0;
while (mask < msb)                // smallest all-ones mask that covers msb
    mask = (mask << 1) + 1;
while (true) {
    source.GetBytes(buffer);
    buffer[n - 1] &= (byte)mask;  // keep only bits that can occur in bound's top byte; the sign bit stays clear
    var r = new BigInteger(buffer);
    if (r < bound)
        return r;
}
I am using the fact that the length, in bytes, of the encoding of bound, as returned by ToByteArray(), is precisely the value of n that I need.
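If you would rather keep the explicit BitLength() call from the first snippet, a minimal sketch of such a helper (not a framework method; recent .NET versions add BigInteger.GetBitLength(), which makes it unnecessary) could be:

public static int BitLength(BigInteger value)
{
    // bit length of a non-negative BigInteger, derived from its byte representation
    var bytes = value.ToByteArray();   // little-endian; the last byte holds the top bits
    int bits = (bytes.Length - 1) * 8; // full bytes below the most significant one
    int msb = bytes[bytes.Length - 1];
    while (msb > 0) { bits++; msb >>= 1; }
    return bits;
}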
That implementation looks correct for generating unbiased integers in a range.
Another approach would be to take the random bytes as the infinite digits of a binary fraction 0.xxxxxx..., multiply that by bound, and round down. It wouldn't actually take infinite digits, just as many as are needed to determine what the rounded-down result is. I think that's more complex, though.
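For what it's worth, a minimal sketch of that idea (illustrative name, drawing one byte at a time) keeps track of the interval of results that are still possible and stops as soon as it collapses to a single value:

public static BigInteger RandomIntegerBelowViaFraction(System.Security.Cryptography.RandomNumberGenerator source, BigInteger bound)
{
    var oneByte = new byte[1];
    BigInteger v = 0; // the random digits read so far, as an integer
    int bits = 0;     // number of random bits consumed
    while (true)
    {
        source.GetBytes(oneByte);
        v = (v << 8) + oneByte[0];
        bits += 8;
        // The fraction f lies in [v / 2^bits, (v + 1) / 2^bits),
        // so floor(bound * f) lies in [low, high].
        var low = (bound * v) >> bits;
        var high = (bound * (v + 1) - 1) >> bits;
        if (low == high)
            return low; // the result no longer depends on the remaining digits
    }
}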
But generating numbers in the full range may be unnecessary for your protocol. RFC 5054 section 3.1 requires only 256-bit private exponents. That number comes from doubling the bit length of the hash output, and it requires that a safe prime is used. An earlier draft of that RFC referred to "On Diffie-Hellman key agreement with short exponents" for an explanation, which is in the first paragraph of section 4 of that paper.
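If that recommendation applies to your variant, drawing the private exponent becomes trivial; a minimal sketch (illustrative name, no rejection loop needed):

public static BigInteger Random256BitExponent(System.Security.Cryptography.RandomNumberGenerator source)
{
    var buffer = new byte[33];     // 32 random bytes plus one spare byte for the sign
    source.GetBytes(buffer);
    buffer[32] = 0;                // clear the top byte so BigInteger reads the value as non-negative
    return new BigInteger(buffer); // uniform in [0, 2^256)
}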
If you prefer more random bits, take the bit length of range minus 1. This is what OpenSSL does for Diffie-Hellman (search for BN_rand and the line before it). That is still trivial to generate (no rejection loop) and less than a factor of two short of the full range; if that factor of two makes a difference, you should use a larger prime.