
Reducing the number of calls to MongoDB with mongoengine

I'm working to optimize a Django application that's (mainly) backed by MongoDB. It's dying under load testing. On the current problematic page, New Relic shows over 700 calls to pymongo.collection:Collection.find. Much of the code was written by junior coders, and normally I would look for places to add indices, make smarter joins and remove loops to reduce query calls, but joins aren't an option here. What I have done (after adding indices based on EXPLAINs) is try to reduce the cost in loops by making a general query and then filtering that smaller set in the loops*. While I've gotten the number down from 900 queries, 700 still seems insane even with the intense amount of work being done on the page. I thought perhaps find was called even when filtering an existing queryset, but the code suggests it's always a database query.

I've added some logging to mongoengine to see where the queries come from and to look at EXPLAIN statements, but I'm not having a ton of luck sifting through the wall of info. mongoengine itself seems to be part of the performance problem: I switched to mongomallard as a test and got a 50% performance improvement on the page. Unfortunately, I got errors on a bunch of other pages (as best I can tell, Mallard doesn't do well when filtering an existing queryset; the error complains about a call to deepcopy that's happening in a generator, which you can't do, so I hit a brick wall there). While Mallard doesn't seem like a workable replacement for us, it does suggest a lot of the processing time is spent converting objects to and from Python in mongoengine.

What can I do to further reduce the calls? Or am I focusing on the wrong thing and should be attacking the problem somewhere else?

EDIT: providing some code/models

The page in question displays the syllabus for a course, showing all the modules in the course, their lessons and the concepts under the lessons. For each concept, the user's progress in the concept is also shown. So there's a lot of looping to get the hierarchy teased out (and it's not stored according to any of the patterns the Mongo docs suggest).

import uuid
from mongoengine import (Document, EmbeddedDocument, EmbeddedDocumentField,
                         ListField, ReferenceField, StringField, UUIDField)

class CourseVersion(Document):
    ...
    course_instances = ListField(ReferenceField('CourseInstance'))
    courseware_containers = ListField(EmbeddedDocumentField('CoursewareContainer'))

class CoursewareContainer(EmbeddedDocument):
    id = UUIDField(required=True, binary=False, default=uuid.uuid4)
    ...
    courseware_containers = ListField(EmbeddedDocumentField('self'))
    teaching_element_instances = ListField(StringField())
The course's modules, lessons and concepts are stored in courseware_containers; we need to get all of the concepts so we can get the list of ids in teaching_element_instances to find the most recent one the user has worked on (if any) for that concept and then look up their progress.

* Just to be clear, I am using a profiler, looking at times, and doing things The Right Way as best I know, not simply changing things and hoping for the best.

Tom asked May 23 '14 11:05

1 Answer

The code sample isn't bad per se, but there are a number of areas that should be considered and may help improve performance.

class CourseVersion(Document):
    ...
    course_instances = ListField(ReferenceField('CourseInstance'))
    courseware_containers = ListField(EmbeddedDocumentField('CoursewareContainer'))

class CoursewareContainer(EmbeddedDocument):
    id = UUIDField(required=True, binary=False, default=uuid.uuid4)
    ....
    courseware_containers = ListField(EmbeddedDocumentField('self'))
    teaching_element_instances = ListField(StringField())

Review

  1. Unbounded lists.
    course_instances, courseware_containers, teaching_element_instances

    If these fields are unbounded and continuously grow then the document will move on disk as it grows, causing disk contention on heavily loaded systems. There are two patterns to help minimise this:

    a) Turn on power of two sizes. This will cost disk space but should lower the amount of I/O churn as the document grows.

    b) Initial padding: custom-pad the document on insert so it gets put into a larger extent, then remove the padding. Really an anti-pattern, but it may give you some mileage.

    The final barrier is the maximum document size: at 16MB you can't grow a document any bigger than that.
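    For illustration, a minimal sketch of turning option (a) on from Python. It assumes `db` is a pymongo Database handle, that the MMAPv1 storage engine is in use (collMod's usePowerOf2Sizes only applies there), and that "course_version" is the model's default collection name:

```python
def enable_power_of_two(db, collection_name):
    # collMod asks the server to round record allocations up to the next
    # power of two, leaving headroom so a growing document moves on disk
    # less often. `db` is assumed to be a pymongo Database handle.
    return db.command("collMod", collection_name, usePowerOf2Sizes=True)

# e.g. enable_power_of_two(db, "course_version")
```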

  2. Lists of ReferenceFields - course_instances

    MongoDB doesn't have joins, so a ReferenceField costs an extra query to look up; essentially they are an in-app join. That isn't bad per se, but it's important to understand the tradeoff. By default mongoengine won't automatically dereference the field; only when you access course_version.course_instances will it issue another query and populate the whole list of references. So it can cost you another query. If you don't need the data, exclude() it from the query to stop any leaking queries.
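    As a sketch of what that exclusion does under the hood: CourseVersion.objects.exclude('course_instances') boils down to a find() with an exclusion projection. In raw pymongo form (field name taken from the models above):

```python
def without_references(collection, query):
    # The {"course_instances": 0} projection strips the reference list on
    # the server, so there is nothing left to dereference client-side and
    # no follow-up query can leak.
    return collection.find(query, {"course_instances": 0})
```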

  3. EmbeddedFields

    These fields are part of the document, so there is no cost for them other than the wire cost of transmitting and loading the data. As they are part of the document, you don't need select_related to get this data.

  4. teaching_element_instances

    Are these a list of ids? The code sample above says it's a StringField. Either way, if you don't need to dereference the whole list, then storing the _ids as StringFields and manually dereferencing may be more efficient if coded correctly, especially if you just need the latest (last?) id.

  5. Model complexity

    The CoursewareContainer is complex. For any given CourseVersion you have n CoursewareContainers, which themselves have a list of n containers, and those each have n containers, and so on...

  6. Finding the most recent instances

    We need to get all of the concepts so we can get the list of ids in teaching_element_instances to find the most recent one the user has worked on (if any) for that concept and then look up their progress.

    I'm unsure if there is a single instance you are after, or one per Container, or one per Course. Either way, the logic for querying the data should be examined. If it's a single instance you are after, then that could be stored against the user to simplify the logic of looking it up. If it's per course or container, then to improve performance ensure you minimise the number of queries: if possible, collect all the ids and then issue a single $in query at the end, rather than doing a query per container.
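    A sketch of that batching, assuming the nested-dict shape of the models above; the "te_id" field on the progress records is hypothetical:

```python
def collect_te_ids(container):
    # Depth-first walk of the nested courseware_containers, gathering
    # every teaching element id along the way.
    ids = list(container.get("teaching_element_instances", []))
    for child in container.get("courseware_containers", []):
        ids.extend(collect_te_ids(child))
    return ids

def build_progress_filter(containers):
    # One $in filter means a single round trip for all progress records,
    # instead of one find() per container.
    all_ids = []
    for container in containers:
        all_ids.extend(collect_te_ids(container))
    return {"te_id": {"$in": all_ids}}
```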

  7. Mongoengine costs

    Currently, there is a performance cost to loading the data into Mongoengine classes; if you don't need the classes and are happy to work with simple dictionaries, then either issue a raw pymongo query or use as_pymongo.
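    A minimal sketch of the raw route; `db` is assumed to be the handle mongoengine's get_db() returns, and "course_version" is assumed to be the default collection name for CourseVersion:

```python
def raw_course_versions(db, query, projection=None):
    # find() decodes BSON straight into plain dicts: no Document
    # construction, no field validation, no dereferencing machinery.
    return list(db.course_version.find(query, projection))
```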

  8. Schema design

    The schema looks logical enough, but is it suitable for the use case? In essence, is it using MongoDB's strengths, or is it putting a relational peg in a document-database-shaped hole? I can't answer that for you, but I do know the way to the happy path with MongoDB is to design the schema based on its use case. With relational databases, schema design from the outset is simple: you normalise. With document databases, how the data is used is a primary factor.

  9. MongoDB best practices

    There are many other best practices, and MongoDB has a guide which might be of interest: MongoDB Operations Best Practices.

Feel free to contact me via the Mongoengine mailing list to discuss further and, if need be, discuss in private.

Ross

Ross answered Oct 26 '22 20:10