I want to generate N random numbers drawn from a specific distribution (e.g. uniform) in the range [a,b] which sum to a constant C. I have tried a couple of solutions I could think of myself, and some proposed in similar threads, but most of them either work only for a limited form of the problem or I can't prove that the outcome still follows the desired distribution.
What I have tried: Generate N random numbers, divide all of them by their sum, and multiply by the desired constant. This seems to work, but the result does not respect the requirement that the numbers stay within [a,b].
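For reference, a minimal sketch of that first attempt (the name scaleToSum and the use of <random> are my own, not part of the question's code):

#include <random>
#include <vector>

// Attempt 1: draw N uniforms in [a,b], then rescale so the total equals C.
std::vector<double> scaleToSum(int n, double a, double b, double C) {
    static std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> dist(a, b);
    std::vector<double> v(n);
    double s = 0.0;
    for (double &x : v) { x = dist(gen); s += x; }
    for (double &x : v) x = x / s * C;   // the sum is now C, but values may leave [a,b]
    return v;
}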
Generate N-1 random numbers, add 0 and the desired constant C, and sort them. Then compute the difference between each pair of consecutive numbers; these differences are the result. This again sums to C, but has the same problem as the last method (the range can be bigger than [a,b]).
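A sketch of that second attempt for comparison (again, the function name and <random> setup are assumptions of mine); it sums to C by construction, but individual differences can fall anywhere in [0,C]:

#include <algorithm>
#include <random>
#include <vector>

// Attempt 2: drop N-1 breakpoints into [0,C], sort, take consecutive differences.
std::vector<double> breakpointDifferences(int n, double C) {
    static std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> dist(0.0, C);
    std::vector<double> cuts(n + 1);
    cuts.front() = 0.0;
    cuts.back()  = C;
    for (int i = 1; i < n; ++i) cuts[i] = dist(gen);
    std::sort(cuts.begin(), cuts.end());
    std::vector<double> out(n);
    for (int i = 0; i < n; ++i) out[i] = cuts[i + 1] - cuts[i];  // sums to C by construction
    return out;
}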
I also tried to generate random numbers while always keeping track of the current min and max so that the desired sum and range are preserved, and came up with this code:
#include <functional>
#include <vector>

bool generate(std::function<int(int,int)> randomGenerator,
              int min, int max, int len, int sum, std::vector<int> &output){
    /**
     * Not possible to produce such a sequence
     */
    if(min*len > sum)
        return false;
    if(max*len < sum)
        return false;

    int curSum = 0;
    int left = sum - curSum;                 // amount still to be distributed
    int leftIndexes = len-1;                 // numbers still to draw after the current one
    int curMax = left - leftIndexes*min;     // largest value that keeps the rest feasible
    int curMin = left - leftIndexes*max;     // smallest value that keeps the rest feasible

    for(int i=0;i<len;i++){
        int num = randomGenerator((curMin < min) ? min : curMin,
                                  (curMax > max) ? max : curMax);
        output.push_back(num);
        curSum += num;
        left = sum - curSum;
        leftIndexes--;
        curMax = left - leftIndexes*min;
        curMin = left - leftIndexes*max;
    }
    return true;
}
This seems to work, but the results are sometimes very skewed and I don't think they follow the original distribution (e.g. uniform). E.g.:
//10 numbers within [1:10] which sum to 50:
generate(uniform,1,10,10,50,output);
//result:
2,7,2,5,2,10,5,8,4,5 => sum=50
//This looks reasonable for uniform, but let's change to
//10 numbers within [1:25] which sum to 50:
generate(uniform,1,25,10,50,output);
//result:
24,12,6,2,1,1,1,1,1,1 => sum= 50
Notice how many ones there are in the output. This might seem reasonable because the range is larger, but the results really don't look like a uniform distribution. I am not sure it is even possible to achieve what I want; maybe the constraints make the problem unsolvable.
You could use the RAND() function to generate N numbers (8 in your case) in column A. Then, in column B you could use the formula B1=A1/SUM(A:A)*320, B2=A2/SUM(A:A)*320 and so on (where 320 is the sum that you are interested in). So you can just enter =RAND() in A1, then drag it down to A8.
In case you want the sample to follow a uniform distribution, the problem reduces to generating N random numbers with sum = 1. This, in turn, is a special case of the Dirichlet distribution, but it can also be computed more easily using the exponential distribution. Here is how: take a uniform sample v1..vN in (0,1), compute the negative logarithms ei := -log(vi) (which are exponentially distributed), and normalize, pi := ei / (e1 + ... + eN).
The p1..pN are uniformly distributed (in the simplex of dim N-1) and their sum is 1.
You can now multiply these pi by the constant C you want and translate them by adding some other constant A, like this:
qi := A + pi*C.
(Note that the qi then sum to N*A + C; if the sum must be exactly C, use qi := A + pi*(C - N*A) instead.)
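A minimal C++ sketch of this recipe (the name uniformWithFixedSum and the <random> setup are my additions, not part of the answer above):

#include <cmath>
#include <limits>
#include <random>
#include <vector>

// Negative logs of uniforms, normalized to sum 1, then scaled and translated.
std::vector<double> uniformWithFixedSum(int n, double A, double C) {
    static std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> u(std::numeric_limits<double>::min(), 1.0);

    std::vector<double> p(n);
    double sum = 0.0;
    for (double &x : p) { x = -std::log(u(gen)); sum += x; }  // exponential(1) samples
    for (double &x : p) x /= sum;                             // p1..pN: uniform on the simplex, sum = 1

    std::vector<double> q(n);
    for (int i = 0; i < n; ++i) q[i] = A + p[i] * C;          // qi := A + pi*C
    return q;
}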
EDIT 3
In order to address some issues raised in the comments, let me add the following:
EDIT 2
One more issue has been raised in the comments:
Why does rescaling a uniform sample not suffice?
In other words, why should I bother to take negative logarithms?
The reason is that if we just rescale, then the resulting sample won't be distributed uniformly across the segment (0,1) (or [a,b] for the final sample).
To visualize this, let's think in 2D, i.e., let's consider the case N=2. A uniform sample (v1,v2) corresponds to a random point in the square with origin (0,0) and corner (1,1). Now, when we normalize such a point by dividing it by the sum s = v1+v2, what we are doing is projecting the point onto the diagonal, as shown in the picture (keep in mind that the diagonal is the line x + y = 1):
But given that the green lines, which are closer to the principal diagonal from (0,0) to (1,1), are longer than the orange ones, which are closer to the axes x and y, the projections tend to accumulate more around the center of the projection line (in blue), where the scaled sample lives. This shows that simple scaling won't produce a uniform sample on the depicted diagonal. On the other hand, it can be proven mathematically that the negative logarithms do produce the desired uniformity. So, instead of copy-pasting a mathematical proof, I would invite everyone to implement both algorithms and check that the resulting plots behave as this answer describes.
(Note: here is a blog post on this interesting subject with an application to the Oil & Gas industry)
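If you want to run that check yourself, a rough sketch of such a comparison for N=2 could look like this (all names, the seed, and the 10-bin histogram are my own illustration, not part of the answer):

#include <algorithm>
#include <cmath>
#include <iostream>
#include <limits>
#include <random>
#include <vector>

// Compare plain rescaling with the negative-log method for N=2.
int main() {
    const int samples = 100000, bins = 10;
    std::mt19937 gen{12345};
    std::uniform_real_distribution<double> u(std::numeric_limits<double>::min(), 1.0);

    std::vector<int> histScale(bins, 0), histLog(bins, 0);
    for (int s = 0; s < samples; ++s) {
        double v1 = u(gen), v2 = u(gen);                        // plain uniforms
        double e1 = -std::log(u(gen)), e2 = -std::log(u(gen));  // negative logs
        histScale[std::min(bins - 1, int(v1 / (v1 + v2) * bins))]++;
        histLog  [std::min(bins - 1, int(e1 / (e1 + e2) * bins))]++;
    }
    std::cout << "bin  rescaled  neg-log\n";
    for (int b = 0; b < bins; ++b)
        std::cout << b << "    " << histScale[b] << "    " << histLog[b] << "\n";
    // Expected: the rescaled counts pile up in the middle bins, while the
    // negative-log counts stay roughly flat across all bins.
}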
Let's try to simplify the problem. By subtracting the lower bound, we can reduce it to finding N numbers in [0, b-a] such that their sum is C-Na (for example, with a=1, b=10, N=10 and C=50, this becomes 10 numbers in [0,9] summing to 40).
Renaming the parameters, we can look for N numbers in [0,m] whose sum is S.
Now the problem is akin to partitioning a segment of length S into N distinct sub-segments whose lengths lie in [0,m].
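This change of variables is mechanical; a small sketch (solveOriginal is my own name, and reducedSolver is a placeholder for whatever method solves the [0,m] problem):

#include <functional>
#include <vector>

// Reduce the [a,b]/sum-C problem to the [0,m]/sum-S problem and map the result back.
std::vector<int> solveOriginal(std::function<std::vector<int>(int, int, int)> reducedSolver,
                               int n, int a, int b, int C) {
    std::vector<int> y = reducedSolver(n, b - a, C - n * a);  // m = b - a, S = C - N*a
    for (int &v : y) v += a;                                  // shift each value back into [a, b]
    return y;
}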
I think the problem is simply not solvable in general.
If S=1, N=1000 and we are drawing integers with m >= 1, the only possible repartition is one 1 and 999 zeroes, which is nothing like a random spread.
There is a correlation between N, m and S, and even picking random values will not make it disappear.
For the most uniform repartition, the lengths of the sub-segments will follow a Gaussian curve with a mean value of S/N.
If you tweak your random numbers differently, you will end up with some other bias, but in the end you will never have both a uniform [a,b] repartition and a total of C, unless the upper bound b of your interval happens to be 2C/N - a (i.e. unless C = N(a+b)/2).