Does anyone know of a data structure that supports the two operations efficiently?
This is sort of like the canonical "bag of marbles" that always comes up in introductory probability classes. You can put arbitrary marbles into the bag and can then efficiently remove marbles uniformly at random.
The best solution I have is as follows - use a self-balancing binary search tree (AVL, AA, red/black, etc.) to store the elements. This gives O(lg n) insertion time. To remove a random element, pick a random index i, then locate and remove the ith element from the tree. If you've augmented the structure with subtree sizes, this rank-based lookup and removal can also be made to run in O(lg n) time.
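For concreteness, here is a minimal sketch of that select-the-ith-element idea. Rather than hand-rolling an augmented tree, it leans on GCC's policy-tree extension (`__gnu_pbds::tree` with `tree_order_statistics_node_update`), which provides `find_by_order` in O(lg n); it assumes a GCC toolchain and distinct elements, and the function name `pop_random` is just illustrative:

```cpp
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <functional>
#include <random>

// Order-statistic tree: a red/black tree augmented with subtree sizes.
using ordered_set = __gnu_pbds::tree<
    int, __gnu_pbds::null_type, std::less<int>,
    __gnu_pbds::rb_tree_tag,
    __gnu_pbds::tree_order_statistics_node_update>;

// Removes and returns a uniformly random element. Precondition: !s.empty().
int pop_random(ordered_set& s, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pick(0, s.size() - 1);
    int value = *s.find_by_order(pick(rng));  // O(lg n) select by rank
    s.erase(value);                           // O(lg n) removal
    return value;
}
```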
This runtime certainly isn't bad, but I'm curious if it's possible to do better - perhaps O(1) for insertion and O(lg n) for removals? Or perhaps something that achieves expected O(1) insertion and deletion using hashing? Or perhaps a stronger amortized bound?
Does anyone have any ideas on how to make this asymptotically faster?
Yes. Use a vector.
To insert, simply append the element at the end and increment the size. To remove, pick an index uniformly at random, swap that element with the last one, then pop off the end value (i.e., return it and decrement the vector's size).
Both operations are amortised O(1) - insertion is amortised because of occasional reallocation, and removal is worst-case O(1).
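A minimal sketch of this swap-with-last approach (the class and method names are illustrative, not part of the answer):

```cpp
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

template <typename T>
class RandomBag {
public:
    // Amortised O(1): append at the end of the vector.
    void insert(T value) { items_.push_back(std::move(value)); }

    // O(1): swap a uniformly random element with the last one, then pop it.
    // Precondition: the bag is non-empty.
    T remove_random(std::mt19937& rng) {
        std::uniform_int_distribution<std::size_t> pick(0, items_.size() - 1);
        std::swap(items_[pick(rng)], items_.back());
        T value = std::move(items_.back());
        items_.pop_back();
        return value;
    }

    std::size_t size() const { return items_.size(); }

private:
    std::vector<T> items_;
};
```

This works because the order of the remaining elements doesn't matter: overwriting the removed slot with the last element keeps the vector contiguous without shifting anything.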