myqueryset = Content.objects.filter(random 100)
The idea is to use Django's annotate (which essentially runs a GROUP BY) to find all the instances that have more than one row with the same my_id, and process those as Ned suggests. The remainder have no duplicates, so you can just grab the individual rows.
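For illustration, the duplicated-versus-unique split can be sketched in plain Python, with no database: Counter plays the role of annotate(Count(...)), and the rows list is a made-up stand-in for the queryset (my_id is the field named above).

```python
from collections import Counter

# Made-up stand-in for the queryset's rows; my_id is the field from the answer.
rows = [{'my_id': 1}, {'my_id': 2}, {'my_id': 2}, {'my_id': 3}, {'my_id': 3}]

# In Django, this per-value count is what
# Content.objects.values('my_id').annotate(n=Count('my_id')) would compute.
counts = Counter(r['my_id'] for r in rows)

duplicated = [r for r in rows if counts[r['my_id']] > 1]  # need special handling
unique = [r for r in rows if counts[r['my_id']] == 1]     # safe to take as-is
```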
Content.objects.all().order_by('?')[:100]
See the order_by docs. Also be aware this approach does not scale well (in fact, it scales really, really badly). See this SO answer for a better way to handle random selection when you have large amounts of data.
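One common scalable alternative, sketched here with a hypothetical id list rather than a real table: sample primary keys client-side and fetch only those rows, instead of asking the database to sort the whole table.

```python
import random

# Hypothetical list of existing primary keys; with Django you could fetch it via
# list(Content.objects.values_list('pk', flat=True)), or sample below the max id.
ids = list(range(1, 100_001))

random_ids = random.sample(ids, 100)  # 100 distinct pks, chosen in Python
# rows = Content.objects.filter(pk__in=random_ids)  # one indexed lookup,
# rather than a full-table sort as with order_by('?')
```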
If you're going to do this more than once, you need to design this into your database.
If you're only doing it once, you can afford to pay the hefty penalty. This gets you exactly 100 rows with good randomness properties, but it loads the entire table into memory.
import random

pool = list(Content.objects.all())   # materializes every row in memory
random.shuffle(pool)
object_list = pool[:100]
Here's another algorithm that's also slow, since it may scan the entire table. It uses very little memory, but it may not return exactly 100 rows.
import random

total_count = Content.objects.count()
fraction = 100.0 / total_count
object_list = [c for c in Content.objects.all() if random.random() < fraction]
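To see why this may not return exactly 100 rows, here is the same Bernoulli-style filter run over a plain range with made-up numbers; the result count only hovers around the target.

```python
import random

random.seed(42)                 # fixed seed so the run is repeatable
total_count = 10_000            # made-up table size
fraction = 100.0 / total_count

# Same filter as above, applied to a plain range instead of a queryset.
sample = [i for i in range(total_count) if random.random() < fraction]
len(sample)                     # hovers around 100, rarely exactly 100
```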
If you want to do this more than once, you need to add an attribute to Content to allow effective filtering for "random" values. For example, you might do this.
class Content(models.Model):
    # ... other fields ...

    def subset(self):
        return self.id % 32768
This will partition your data into 32768 distinct subsets, each holding 1/32768th of your rows. To get 100 random items, you need roughly 100*32768/total_count subsets of your data.
Note that subset() above is a Python method, so it can't appear in filter() directly; either store the value in an indexed IntegerField, or compute the same expression in the database with an F object:

from django.db.models import F

total_count = Content.objects.count()
no_of_subsets = 100 * 32768 // total_count
object_list = Content.objects.annotate(
    bucket=F('id') % 32768       # same expression as the subset() method
).filter(bucket__lte=no_of_subsets)
This is fast and it's reproducible. The subsets are "arbitrary" not technically "random".
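With made-up numbers, the bucket arithmetic and the reproducibility claim can be checked in plain Python, no database needed. Note that filtering with lte includes bucket 0 as well, so the result slightly overshoots the target of 100.

```python
total_count = 1_000_000                      # hypothetical table size
no_of_subsets = 100 * 32768 // total_count   # buckets needed for roughly 100 rows

ids = range(1, total_count + 1)
picked = [i for i in ids if i % 32768 <= no_of_subsets]

# Deterministic: rerunning always selects the same ids (reproducible, not random).
```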