Is there a more efficient way of doing this, maybe with F expressions? Some way to reduce the number of hits to the DB?
# 1st way hits DB twice per object
def something():
    pks = [4, 2, 1, 3, 0]
    for i in range(len(pks)):
        obj = mymodel.objects.get(pk=pks[i])  # 1st hit: SELECT
        obj.attr = i
        obj.save()                            # 2nd hit: UPDATE
# 2nd way hits DB once per obj and once for the queryset
from django.db.models import Case, When

def something():
    pks = [4, 2, 1, 3, 0]
    order = Case(*[When(pk=pk, then=pos) for pos, pk in enumerate(pks)])
    query = mymodel.objects.filter(pk__in=pks).order_by(order)
    for i in range(len(query)):  # len() evaluates and caches the queryset
        query[i].attr = i
        query[i].save()          # one UPDATE per object
Edit: renamed the variables to keep them consistent.
Generally, making multiple database queries instead of one large query is to be avoided. However, your case seems to be an exception. (Note: I am basing this answer on the information provided. Please don't edit your question to say you actually meant ....)
The following query will be very fast no matter how many records you have, because it retrieves a single item by its primary key. Every RDBMS is designed to handle this really well.
mymodel.objects.get(pk=pks[i])
However, you can make your function more efficient like this:
def something():
    pks = [4, 2, 1, 3, 0]
    for i, pk in enumerate(pks):
        mymodel.objects.filter(pk=pk).update(attr=i)  # single UPDATE, no SELECT
Now it hits the DB only once per object. The query above just translates to
UPDATE mapp_mymodel SET attr = i WHERE pk = 1
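If you want to go further and collapse all of those statements into a single query, the same Case/When expression you built for ordering can be moved into update() itself. This is only a minimal sketch, assuming Django 1.8+ and that mymodel.attr is an integer field:
from django.db.models import Case, When, Value, IntegerField

def something():
    pks = [4, 2, 1, 3, 0]
    # One UPDATE for the whole list: each row's attr becomes its index in pks.
    mymodel.objects.filter(pk__in=pks).update(
        attr=Case(
            *[When(pk=pk, then=Value(i)) for i, pk in enumerate(pks)],
            output_field=IntegerField(),
        )
    )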
If I'm not mistaken, you can do something like
pks = [4, 2, 1, 3, 0]
mymodel.objects.filter(pk__in=pks).update(attr='test')
Unless you are trying to use the for-loop index as the attribute value, in which case this does not work.
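If you do need per-object values but still want to keep the number of round trips low, bulk_update is another option. A minimal sketch, assuming Django 2.2+:
pks = [4, 2, 1, 3, 0]
objs = mymodel.objects.in_bulk(pks)   # one SELECT, returns {pk: instance}
for i, pk in enumerate(pks):
    objs[pk].attr = i                 # assign the list index in memory
mymodel.objects.bulk_update(objs.values(), ['attr'])  # one batched UPDATE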