While reading the MongoDB documentation, I found the following note:
When a $sort immediately precedes a $limit in the pipeline, the $sort operation only maintains the top n results as it progresses, where n is the specified limit, and MongoDB only needs to store n items in memory. This optimization still applies when allowDiskUse is true and the n items exceed the aggregation memory limit.
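A rough way to picture the quoted optimization (an illustrative sketch only, not MongoDB's actual implementation) is a bounded buffer that never holds more than n documents, no matter how many stream through the sort:

```javascript
// Sketch of a top-n sort: only `n` items are ever held in memory,
// regardless of how many documents stream through.
// (Illustration only; MongoDB's internals differ.)
function topN(stream, n, cmp) {
  const buffer = [];
  for (const doc of stream) {
    buffer.push(doc);
    buffer.sort(cmp);                    // keep the buffer ordered
    if (buffer.length > n) buffer.pop(); // discard everything past the top n
  }
  return buffer;
}

// Example: top 3 scores out of 1000 documents, best first.
// (i * 37) % 1000 is a permutation of 0..999, so the top scores are 999, 998, 997.
const docs = Array.from({ length: 1000 }, (_, i) => ({ score: (i * 37) % 1000 }));
const top3 = topN(docs, 3, (a, b) => b.score - a.score);
console.log(top3.map(d => d.score)); // → [ 999, 998, 997 ]
```

The point of the sketch is the memory bound: the buffer costs O(n), while a full sort would have to materialize all 1000 documents.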
If I'm right about this, it applies only when $sort and $limit are used together, like
db.coll.aggregate([ ..., {$sort: ...}, {$limit: limit}, ... ]);
However, I think most of the time we would have
db.coll.aggregate([ ..., {$sort: ...}, {$skip: skip}, {$limit: limit}, ... ]);
Question 1: Does this mean the rule above doesn't apply if I use $skip here?
I ask this question because, theoretically, MongoDB could still calculate the top skip + limit records and enhance performance by sorting only those records. I didn't find any documentation about this, though. And if the rule doesn't apply:
Question 2: Do I need to change my query to the following to enhance performance?
db.coll.aggregate([ ..., {$sort: ...}, {$limit: skip + limit}, {$skip: skip}, {$limit: limit}, ... ]);
EDIT: I think explaining my use case would make the question above make more sense. I'm using the text search feature provided by MongoDB 2.6 to look for products. I'm worried that if the user inputs a very common keyword like "red", there will be too many results returned. Thus I'm looking for better ways to generate this result.
EDIT2: It turns out that the last code above is equivalent to
db.coll.aggregate([ ..., {$sort: ...}, {$limit: skip + limit}, {$skip: skip}, ... ]);
Thus we can always use this form to make the top-n rule apply.
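The equivalence is easy to check outside MongoDB with plain array slicing (a sketch of the page arithmetic, not the database itself): on an already-sorted sequence, taking the first skip + limit items and then dropping the first skip yields exactly the same page as skipping first and limiting afterwards.

```javascript
// Demonstration that, on a sorted list,
//   limit(skip + limit) then skip(skip)
// produces the same page as
//   skip(skip) then limit(limit).
const sorted = Array.from({ length: 100 }, (_, i) => i); // stand-in for sorted docs
const skip = 20, limit = 5;

const pageA = sorted.slice(0, skip + limit).slice(skip); // $limit then $skip
const pageB = sorted.slice(skip).slice(0, limit);        // $skip then $limit

console.log(pageA); // → [ 20, 21, 22, 23, 24 ]
console.log(JSON.stringify(pageA) === JSON.stringify(pageB)); // → true
```

Because the first form ends the pipeline's sort-feeding prefix with a bare $limit, the top-n optimization from the documentation quote can kick in with n = skip + limit.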
Since this is a text search query we are talking about, the most optimal form is this:
db.collection.aggregate([
    { "$match": { "$text": { "$search": "cake tea" } } },
    { "$sort": { "score": { "$meta": "textScore" } } },
    { "$limit": skip + limit },
    { "$skip": skip }
])
The rationale of reserving memory only for the top "sort" results will only work within its own "limits", as it were, and this will not be optimal for anything beyond a few reasonable "pages" of data.
Beyond what is reasonable for memory consumption, the additional stage will likely have a negative effect rather than positive.
These really are the practical limitations of the text search capabilities available to MongoDB in its current form. For anything more detailed and requiring more performance, then just as with many SQL "full text" solutions, you are better off using an external, purpose-built text search solution.
I found that the sequence of $limit and $skip seems to be immaterial. If I specify $skip before $limit, MongoDB will apply $limit before $skip under the hood.
> db.system.profile.find().limit(1).sort( { ts : -1 } ).pretty()
{
    "op" : "command",
    "ns" : "archiprod.userinfos",
    "command" : {
        "aggregate" : "userinfos",
        "pipeline" : [
            { "$sort" : { "updatedAt" : -1 } },
            { "$limit" : 625 },
            { "$skip" : 600 }
        ],
    },
    "keysExamined" : 625,
    "docsExamined" : 625,
    "cursorExhausted" : true,
    "numYield" : 4,
    "nreturned" : 25,
    "millis" : 25,
    "planSummary" : "IXSCAN { updatedAt: -1 }",
    /* Some fields are omitted */
}
What happens if I switch $skip and $limit? I got the same result in terms of keysExamined and docsExamined.
> db.system.profile.find().limit(1).sort( { ts : -1 } ).pretty()
{
    "op" : "command",
    "ns" : "archiprod.userinfos",
    "command" : {
        "aggregate" : "userinfos",
        "pipeline" : [
            { "$sort" : { "updatedAt" : -1 } },
            { "$skip" : 600 },
            { "$limit" : 25 }
        ],
    },
    "keysExamined" : 625,
    "docsExamined" : 625,
    "cursorExhausted" : true,
    "numYield" : 5,
    "nreturned" : 25,
    "millis" : 71,
    "planSummary" : "IXSCAN { updatedAt: -1 }",
}
I then checked the explain result of the query. I found that totalDocsExamined is already 625 in the $limit stage.
> db.userinfos.explain('executionStats').aggregate([
...     { "$sort" : { "updatedAt" : -1 } },
...     { "$limit" : 625 },
...     { "$skip" : 600 }
... ])
{
    "stages" : [
        {
            "$cursor" : {
                "sort" : { "updatedAt" : -1 },
                "limit" : NumberLong(625),
                "queryPlanner" : {
                    "winningPlan" : {
                        "stage" : "FETCH",
                        "inputStage" : {
                            "stage" : "IXSCAN",
                            "keyPattern" : { "updatedAt" : -1 },
                            "indexName" : "updatedAt_-1",
                        }
                    },
                },
                "executionStats" : {
                    "executionSuccess" : true,
                    "nReturned" : 625,
                    "executionTimeMillis" : 22,
                    "totalKeysExamined" : 625,
                    "totalDocsExamined" : 625,
                    "executionStages" : {
                        "stage" : "FETCH",
                        "nReturned" : 625,
                        "executionTimeMillisEstimate" : 0,
                        "works" : 625,
                        "advanced" : 625,
                        "docsExamined" : 625,
                        "inputStage" : {
                            "stage" : "IXSCAN",
                            "nReturned" : 625,
                            "works" : 625,
                            "advanced" : 625,
                            "keyPattern" : { "updatedAt" : -1 },
                            "indexName" : "updatedAt_-1",
                            "keysExamined" : 625,
                        }
                    }
                }
            }
        },
        { "$skip" : NumberLong(600) }
    ]
}
And surprisingly, I found switching $skip and $limit results in the same explain result.
> db.userinfos.explain('executionStats').aggregate([
...     { "$sort" : { "updatedAt" : -1 } },
...     { "$skip" : 600 },
...     { "$limit" : 25 }
... ])
{
    "stages" : [
        {
            "$cursor" : {
                "sort" : { "updatedAt" : -1 },
                "limit" : NumberLong(625),
                "queryPlanner" : { /* Omitted */ },
                "executionStats" : {
                    "executionSuccess" : true,
                    "nReturned" : 625,
                    "executionTimeMillis" : 31,
                    "totalKeysExamined" : 625,
                    "totalDocsExamined" : 625,
                    /* Omitted */
                }
            }
        },
        { "$skip" : NumberLong(600) }
    ]
}
As you can see, even though I specified $skip before $limit, in the explain result it's still $limit before $skip.