I have a set of objects in a Vector from which I'd like to select a random subset (e.g. 100 items coming back; pick 5 randomly). In my first (very hasty) pass I did an extremely simple and perhaps overly clever solution:
Vector itemsVector = getItems();
Collections.shuffle(itemsVector);
itemsVector.setSize(5);
While this has the advantage of being nice and simple, I suspect it's not going to scale very well, i.e. Collections.shuffle() must be O(n) at least. My less clever alternative is
Vector itemsVector = getItems();
Random rand = new Random(System.currentTimeMillis()); // would make this static to the class
List subsetList = new ArrayList(5);
for (int i = 0; i < 5; i++) {
    // be sure to use Vector.remove() or you may get the same item twice
    subsetList.add(itemsVector.remove(rand.nextInt(itemsVector.size())));
}
Any suggestions on better ways to draw out a random subset from a Collection?
Jon Bentley discusses this in either 'Programming Pearls' or 'More Programming Pearls'. You need to be careful with your N-of-M selection process, but I think the code shown works correctly. Rather than randomly shuffling all the items, you can shuffle only the first N positions - a useful saving when N << M.
Knuth also discusses these algorithms - I believe that would be Vol 2 "Seminumerical Algorithms", in the section on random sampling and shuffling, but my set is packed pending a move of house so I can't formally check that.
@Jonathan,
I believe this is the solution you're talking about:
void genknuth(int m, int n)
{
    for (int i = 0; i < n; i++)
        /* select m of remaining n-i */
        if ((bigrand() % (n-i)) < m) {
            cout << i << "\n";
            m--;
        }
}
It's on page 127 of Programming Pearls by Jon Bentley and is based on Knuth's algorithm.
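For the Java-minded, here is a rough translation of that routine (a sketch only: the method name selectSample is mine, and java.util.Random stands in for the book's bigrand()):

    // pick m of the indices 0..n-1 in increasing order;
    // each index i is kept with probability (picks left) / (items left)
    static List<Integer> selectSample(int m, int n) {
        Random rnd = new Random();
        List<Integer> picked = new ArrayList<>(m);
        for (int i = 0; i < n && m > 0; i++) {
            // select m of the remaining n-i
            if (rnd.nextInt(n - i) < m) {
                picked.add(i);
                m--;
            }
        }
        return picked;
    }

With List, ArrayList and Random imported from java.util, selectSample(5, 100) returns 5 indices drawn uniformly from 0..99, already sorted, which you can then use to index into the original Vector.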
EDIT: I just saw a further modification on page 129:
void genshuf(int m, int n)
{
    int i, j;
    int *x = new int[n];
    for (i = 0; i < n; i++)
        x[i] = i;
    for (i = 0; i < m; i++) {
        j = randint(i, n-1);
        int t = x[i]; x[i] = x[j]; x[j] = t;
    }
    sort(x, x+m);
    for (i = 0; i < m; i++)
        cout << x[i] << "\n";
}
This is based on the idea that "...we need shuffle only the first m elements of the array..."
If you're trying to select k distinct elements from a list of n, the methods you gave above will be O(n) or O(kn), because removing an element from a Vector will cause an arraycopy to shift all the elements down.
Since you're asking for the best way, it depends on what you are allowed to do with your input list.
If it's acceptable to modify the input list, as in your examples, then you can simply swap k random elements to the beginning of the list and return them in O(k) time like this:
public static <T> List<T> getRandomSubList(List<T> input, int subsetSize)
{
    Random r = new Random();
    int inputSize = input.size();
    for (int i = 0; i < subsetSize; i++)
    {
        // swap a random element from the not-yet-chosen tail into position i
        int indexToSwap = i + r.nextInt(inputSize - i);
        T temp = input.get(i);
        input.set(i, input.get(indexToSwap));
        input.set(indexToSwap, temp);
    }
    // the first subsetSize elements are now a uniform random sample
    return input.subList(0, subsetSize);
}
If the list must end up in the same state it began, you can keep track of the positions you swapped, and then return the list to its original state after copying your selected sublist. This is still an O(k) solution.
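A minimal sketch of that restoring variant, in the same style as the method above (the swap log and the final copy-then-undo steps are my additions, not code from the answer):

    public static <T> List<T> getRandomSubListRestoring(List<T> input, int subsetSize)
    {
        Random r = new Random();
        int inputSize = input.size();
        int[] swappedWith = new int[subsetSize]; // remember where each position was swapped to
        for (int i = 0; i < subsetSize; i++)
        {
            int indexToSwap = i + r.nextInt(inputSize - i);
            swappedWith[i] = indexToSwap;
            T temp = input.get(i);
            input.set(i, input.get(indexToSwap));
            input.set(indexToSwap, temp);
        }
        List<T> result = new ArrayList<>(input.subList(0, subsetSize)); // copy before undoing
        for (int i = subsetSize - 1; i >= 0; i--) // undo the swaps in reverse order
        {
            int indexToSwap = swappedWith[i];
            T temp = input.get(i);
            input.set(i, input.get(indexToSwap));
            input.set(indexToSwap, temp);
        }
        return result;
    }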
If, however, you cannot modify the input list at all and k is much less than n (like 5 from 100), it would be much better not to remove selected elements each time, but simply select each element, and if you ever get a duplicate, toss it out and reselect. This will give you O(kn / (n-k)) which is still close to O(k) when n dominates k. (For example, if k is less than n / 2, then it reduces to O(k)).
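A sketch of that reselect-on-duplicate approach (the method name and the set of chosen indexes are mine; Random, Set, HashSet, List and ArrayList are all java.util):

    public static <T> List<T> pickDistinct(List<T> input, int k)
    {
        Random r = new Random();
        Set<Integer> chosenIndexes = new HashSet<>(); // indexes already used
        List<T> result = new ArrayList<>(k);
        while (result.size() < k)
        {
            int index = r.nextInt(input.size());
            if (chosenIndexes.add(index)) // add() returns false on a duplicate, so we just loop and reselect
            {
                result.add(input.get(index));
            }
        }
        return result;
    }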
If k is not dominated by n and you cannot modify the list, you might as well copy your original list and use your first solution, because O(n) will be just as good as O(k).
As others have noted, if you are depending on strong randomness where every sublist is possible (and unbiased), you'll definitely need something stronger than java.util.Random. See java.security.SecureRandom.
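For example, the original shuffle-and-truncate approach can take a stronger source directly (a sketch only, reusing the hypothetical getItems() from the question; SecureRandom extends java.util.Random, so Collections.shuffle accepts it):

    Vector itemsVector = getItems();
    Collections.shuffle(itemsVector, new java.security.SecureRandom());
    itemsVector.setSize(5);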
I wrote an efficient implementation of this a few weeks back. It's in C# but the translation to Java is trivial (essentially the same code). The plus side is that it's also completely unbiased (which some of the existing answers aren't) - a way to test that is here.
It's based on a Durstenfeld implementation of the Fisher-Yates shuffle.
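The C# code isn't reproduced here, but in Java the Durstenfeld variant looks roughly like this (a generic sketch of the algorithm, not the answerer's actual implementation):

    // walk the array from the end, swapping each slot with a uniformly
    // chosen slot at or before it; every permutation is equally likely
    static <T> void shuffle(T[] items, Random rnd)
    {
        for (int i = items.length - 1; i > 0; i--)
        {
            int j = rnd.nextInt(i + 1); // 0 <= j <= i
            T tmp = items[i];
            items[i] = items[j];
            items[j] = tmp;
        }
    }

As in the question's first snippet, you would shuffle and then keep the first 5 elements; stopping after only 5 swaps (working from index 0 upward instead) gives the partial-shuffle optimization shown earlier.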