
Memory efficient (constant) and speed optimized iteration over a large table in Django

I have a very large table. It's currently in a MySQL database. I use django.

I need to iterate over each element of the table to pre-compute some particular data (maybe if I was better I could do otherwise but that's not the point).

I'd like to keep the iteration as fast as possible with a constant usage of memory.

As it is already clearly explained in Limiting Memory Use in a *Large* Django QuerySet and Why is iterating through a large Django QuerySet consuming massive amounts of memory?, a simple iteration over all objects in Django will kill the machine, since it retrieves ALL objects from the database.

Towards a solution

First of all, to reduce memory consumption you should make sure DEBUG is False (or monkey patch the cursor: turn off SQL logging while keeping settings.DEBUG?), so that Django doesn't keep a log of every query on the connection for debugging.
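
For example, a quick check in a long-running loop or shell session (a minimal sketch; django.db.reset_queries() simply clears the per-connection query log that DEBUG=True accumulates):

from django.conf import settings
from django.db import reset_queries

# With DEBUG=True, Django appends every executed query to connection.queries,
# which grows without bound in a long-running process.
assert not settings.DEBUG

# If you must keep DEBUG=True, clear the query log periodically inside your loop:
reset_queries()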

But even with that,

for model in Model.objects.all()

is a no go.

Not even with the slightly improved form:

for model in Model.objects.all().iterator()

Using iterator() will save you some memory by not storing the results in the queryset's internal cache (though not necessarily on PostgreSQL!); but it will apparently still retrieve all the objects from the database.

A naive solution

The solution in the first question is to slice the results into chunks of chunk_size, based on a counter. There are several ways to write it, but basically they all come down to an OFFSET + LIMIT query in SQL.

something like:

chunk_size = 1000  # tune as needed
qs = Model.objects.all()
counter = 0
count = qs.count()
while counter < count:
    for model in qs[counter:counter + chunk_size].iterator():
        yield model
    counter += chunk_size

While this is memory efficient (memory usage is bounded by chunk_size), it's really poor in terms of speed: as OFFSET grows, both MySQL and PostgreSQL (and likely most DBs) will start choking and slowing down.
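
To see why, you can print the SQL that a slice generates (illustrative only; the exact SQL depends on your backend and model):

qs = Model.objects.all()
# Slicing compiles to LIMIT/OFFSET; printing .query shows the SQL without executing it.
print(qs[100000:101000].query)
# -> roughly: SELECT ... FROM app_model LIMIT 1000 OFFSET 100000
# The database still has to walk past the first 100000 rows for every such chunk.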

A better solution

A better solution is available in this post by Thierry Schellenbach. It filters on the PK, which is way faster than offsetting (how much faster probably depends on the DB):

import gc

pk = 0
last_pk = qs.order_by('-pk')[0].pk
queryset = qs.order_by('pk')
while pk < last_pk:
    for row in queryset.filter(pk__gt=pk)[:chunk_size]:
        pk = row.pk
        yield row
    gc.collect()

This is starting to get satisfactory: memory is now O(chunk_size) and speed is roughly O(N).

Issues with the "better" solution

The better solution only works when the PK is available in the QuerySet. Unfortunately, that's not always the case, in particular when the QuerySet contains combinations of distinct() (GROUP BY) and/or values() (ValuesQuerySet).

For that situation the "better solution" cannot be used.
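
For instance (with hypothetical field names), a values()/annotate() queryset yields plain dicts with no pk to filter on:

from django.db.models import Count

# Rows come back as dicts like {'category': ..., 'n': ...}; there is no row.pk,
# so pk-based chunking cannot be applied to this queryset directly.
qs = Model.objects.values('category').annotate(n=Count('id'))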

Can we do better?

Now I'm wondering if we can go faster and avoid the issue with QuerySets that have no PK. Maybe by using something that I found in other answers, but only in pure SQL: cursors.

Since I'm quite bad with raw SQL, in particular in Django, here comes the real question:

How can we build a better Django QuerySet iterator for large tables?

My take from what I've read is that we should use server-side cursors (apparently (see references) using a standard Django Cursor would not achieve the same result, because by default both python-MySQL and psycopg connectors cache the results).

Would this really be a faster (and/or more efficient) solution?

Can this be done using raw SQL in django? Or should we write specific python code depending on the database connector?

Server Side cursors in PostgreSQL and in MySQL
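
For orientation, this is roughly what a server-side cursor looks like in plain MySQLdb, outside Django (a minimal sketch; connection parameters and table/column names are placeholders):

import MySQLdb
import MySQLdb.cursors

# SSCursor streams rows from the server instead of fetching the whole result set at once.
conn = MySQLdb.connect(host='localhost', user='myuser', passwd='secret', db='mydb',
                       cursorclass=MySQLdb.cursors.SSCursor)
cursor = conn.cursor()
cursor.execute('SELECT id, some_column FROM huge_table')
for row in cursor:  # rows are fetched from the server as you iterate
    pass            # process each row here
cursor.close()
conn.close()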

That's as far as I could get for the moment...

a Django chunked_iterator()

Now, of course, the best would be to have this method work like queryset.iterator(), rather than iterate(queryset), and to be part of Django core, or at least a pluggable app.

Update: Thanks to "T" in the comments for finding a django ticket that carries some additional information. Differences in connector behaviors make it so that probably the best solution would be to create a specific chunked() method rather than transparently extending iterator() (sounds like a good approach to me). An implementation stub exists, but there hasn't been any work on it in a year, and it does not look like the author is ready to jump on that yet.

Additional Refs:

  1. Why does MYSQL higher LIMIT offset slow the query down?
  2. How can I speed up a MySQL query with a large offset in the LIMIT clause?
  3. http://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/
  4. postgresql: offset + limit gets to be very slow
  5. Improving OFFSET performance in PostgreSQL
  6. http://www.depesz.com/2011/05/20/pagination-with-fixed-order/
  7. How to get a row-by-row MySQL ResultSet in python
  8. Server Side Cursor in MySQL

Edits:

Django 1.6 is adding persistent database connections

Django Database Persistent Connections

This should, under some conditions, make it easier to use cursors. Still, it's outside my current skills (and available time to learn) to implement such a solution...
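
For reference, persistent connections in Django 1.6+ are enabled through the CONN_MAX_AGE database setting (illustrative values; names and credentials are placeholders):

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'CONN_MAX_AGE': 600,  # keep each connection open for up to 10 minutes
    }
}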

Also, the "better solution" definitely does not work in all situations and cannot be used as a generic approach, only as a stub to be adapted case by case...

Asked Jan 03 '13 by Stefano


2 Answers

The essential answer: use raw SQL with server-side cursors.

Sadly, as of Django 1.5.2 there is no official way to create a server-side MySQL cursor (I'm not sure about other database engines). So I wrote some magic code to solve this problem.

For Django 1.5.2 and MySQLdb 1.2.4, the following code will work. Also, it's well commented.

Caution: This is not based on public APIs, so it will probably break in future Django versions.

# This script should be tested under a Django shell, e.g., ./manage.py shell

from types import MethodType

import MySQLdb.cursors
import MySQLdb.connections
from django.db import connection
from django.db.backends.util import CursorDebugWrapper


def close_sscursor(self):
    """An instance method that replaces the close() method of the old cursor.

    Closing the server-side cursor with the original close() method will be
    quite slow and memory-intensive if the large result set was not exhausted,
    because fetchall() is called internally to get the remaining records.
    Notice that the close() method is also called when the cursor is garbage
    collected.

    This method is more efficient at closing the cursor, but if the result set
    is not fully iterated, the next cursor created from the same connection
    won't work properly. You can avoid this by either (1) closing the connection
    before creating a new cursor, or (2) iterating the result set fully before
    closing the server-side cursor.
    """
    if isinstance(self, CursorDebugWrapper):
        self.cursor.cursor.connection = None
    else:
        # This is for CursorWrapper object
        self.cursor.connection = None


def get_sscursor(connection, cursorclass=MySQLdb.cursors.SSCursor):
    """Get a server-side MySQL cursor."""
    if connection.settings_dict['ENGINE'] != 'django.db.backends.mysql':
        raise NotImplementedError('Only MySQL engine is supported')
    cursor = connection.cursor()
    if isinstance(cursor, CursorDebugWrapper):
        # Get the real MySQLdb.connections.Connection object
        conn = cursor.cursor.cursor.connection
        # Replace the internal client-side cursor with a server-side cursor
        cursor.cursor.cursor = conn.cursor(cursorclass=cursorclass)
    else:
        # This is for CursorWrapper object
        conn = cursor.cursor.connection
        cursor.cursor = conn.cursor(cursorclass=cursorclass)
    # Replace the old close() method
    cursor.close = MethodType(close_sscursor, cursor)
    return cursor


# Get the server-side cursor
cursor = get_sscursor(connection)

# Run a query with a large result set. Notice that the memory consumption is low.
cursor.execute('SELECT * FROM million_record_table')

# Fetch a single row, fetchmany() rows or iterate it via "for row in cursor:"
cursor.fetchone()

# You can interrupt the iteration at any time. This calls the new close() method,
# so no warning is shown.
cursor.close()

# Connection must be closed to let new cursors work properly. See the comments
# of close_sscursor().
connection.close()
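
As a follow-up, if you prefer dict-like rows over tuples, the standard DB-API cursor.description attribute gives you the column names (a small convenience sketch building on get_sscursor() above):

def iter_dicts(cursor):
    """Yield each row of an executed cursor as a dict keyed by column name."""
    columns = [col[0] for col in cursor.description]
    for row in cursor:
        yield dict(zip(columns, row))

cursor = get_sscursor(connection)
cursor.execute('SELECT * FROM million_record_table')
for row in iter_dicts(cursor):
    pass  # process each row; memory stays low because rows are streamed
cursor.close()
connection.close()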
Answered Oct 07 '22 by Rockallite


Short Answer

If you just need to iterate over the table itself without doing anything fancy, Django comes with a builtin iterator:

queryset.iterator(chunk_size=1000)

This causes Django to clean up its own cache to reduce memory use. As of Django 4.1, this will even work with prefetch_related.
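
A minimal sketch of combining the two (model and related names are hypothetical; iterator() together with prefetch_related requires Django 4.1+):

for author in Author.objects.prefetch_related("books").iterator(chunk_size=1000):
    # The related objects are prefetched per chunk, so this does not issue
    # an extra query for every author.
    titles = [book.title for book in author.books.all()]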

If you want to get back pages rather than individual objects to combine with other optimizations such as bulk_update, use this:

def queryset_to_pages(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        yield page
        pk = max(obj.pk for obj in page)
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
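
For example, combining it with bulk_update while keeping memory bounded (a sketch; "flag" is a hypothetical field on MyModel):

for page in queryset_to_pages(MyModel.objects.all()):
    objects = list(page)
    for obj in objects:
        obj.flag = True  # modify the hypothetical field
    MyModel.objects.bulk_update(objects, ["flag"])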

Performance Profiling

I profiled a number of different approaches on a PostgreSQL table with about 200,000 rows on Django 3.2. For every query, I added up the sum of the ids, both to ensure that Django was actually retrieving the objects and so that I could verify correctness of iteration between queries. All of the timings were taken after several iterations over the table in question to minimize caching advantages of later tests.

Basic Iteration

The basic approach is just iterating over the table. The main issue with this approach is that the amount of memory used is not constant; it grows with the size of the table, and I've seen this run out of memory on larger tables.

x = sum(i.id for i in MyModel.objects.all())

Wall time: 3.53 s, 22MB of memory (BAD)

Django Iterator

The Django iterator (at least as of Django 3.2) fixes the memory issue with a minor performance benefit. Presumably this comes from Django spending less time managing its cache.

assert sum(i.id for i in MyModel.objects.all().iterator(chunk_size=1000)) == x

Wall time: 3.11 s, <1MB of memory

Custom Iterator

The natural comparison point is attempting to do the paging ourselves with progressively advancing primary-key queries. While this is an improvement over naive iteration in that it has constant memory, it actually loses to Django's built-in iterator on speed because it makes more database queries.

def queryset_iterator(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        for obj in page:
            yield obj
            pk = obj.pk
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]

assert sum(i.id for i in queryset_iterator(MyModel.objects.all())) == x

Wall time: 3.65 s, <1MB of memory

Custom Paging Function

The main reason to use the custom iteration is so that you can get the results back in pages. This function is very useful to plug into bulk updates while only using constant memory. It's a bit slower than queryset_iterator in my tests and I don't have a coherent theory as to why, but the slowdown isn't substantial.

def queryset_to_pages(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        yield page
        pk = max(obj.pk for obj in page)
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]

assert sum(i.id for page in queryset_to_pages(MyModel.objects.all()) for i in page) == x

Wall time: 4.49 s, <1MB of memory

Alternative Custom Paging Function

Given that Django's queryset iterator is faster than doing paging ourselves, the queryset pager can be alternately implemented to use it. It's a little bit faster than doing paging ourselves, but the implementation is messier. Readability matters, which is why my personal preference is the previous paging function, but this one can be better if your queryset doesn't have a primary key in the results (for whatever reason).

def queryset_to_pages2(queryset, page_size=1000):
    page = []
    page_count = 0
    for obj in queryset.iterator():
        page.append(obj)
        page_count += 1
        if page_count == page_size:
            yield page
            page = []
            page_count = 0
    if page:
        yield page

assert sum(i.id for page in queryset_to_pages2(MyModel.objects.all()) for i in page) == x

Wall time: 4.33 s, <1MB of memory


Bad Approaches

The following are approaches you should never use (many of which are suggested in the question) along with why.

Do NOT Use Slicing on an Unordered Queryset

Whatever you do, do NOT slice an unordered queryset. This does not correctly iterate over the table. The reason is that the slice operation performs a SQL LIMIT + OFFSET query based on your queryset, and Django querysets have no order guarantee unless you use order_by. As a result, each time you take a slice you are getting a random slice of your table, which means your slices will almost certainly overlap and won't cover all rows of the table between them.

def very_bad_iterator(queryset, page_size=1000):
    counter = 0
    count = queryset.count()
    while counter < count:     
        for model in queryset[counter:counter + page_size].iterator():
            yield model
        counter += page_size

assert sum(i.id for i in very_bad_iterator(MyModel.objects.all())) == x

Assertion Error; i.e. INCORRECT RESULT COMPUTED!!!

Do NOT use Slicing for Whole-Table Iteration in General

Even if we order the queryset, list slicing is abysmal from a performance perspective. This is because SQL OFFSET is a linear-time operation: the database still scans past all the skipped rows, so a LIMIT + OFFSET paged iteration over N rows scans roughly N²/(2 · page_size) rows in total, i.e. quadratic time, which you absolutely do not want.

def bad_iterator(queryset, page_size=1000):
    counter = 0
    count = queryset.count()
    while counter < count:     
        for model in queryset.order_by("id")[counter:counter + page_size].iterator():
            yield model
        counter += page_size

assert sum(i.id for i in bad_iterator(MyModel.objects.all())) == x

Wall time: 4min 34s (BAD), <1MB of memory

Do NOT use Django's Paginator for Whole-Table Iteration

Django comes with a built-in Paginator. It may be tempting to think that it is appropriate for doing a paged iteration of a database table, but it is not. The point of Paginator is to return a single page of a result to a UI or an API endpoint. It is substantially slower than any of the good approaches at iterating over a table.

from django.core.paginator import Paginator

def bad_paged_iterator(queryset, page_size=1000):
    p = Paginator(queryset.order_by("pk"), page_size)
    for i in p.page_range:
        yield p.get_page(i)
        
assert sum(i.id for page in bad_paged_iterator(MyModel.objects.all()) for i in page) == x

Wall time: 13.1 s (Not great), <1MB of memory

Answered Oct 07 '22 by Zags