Find largest document size in MongoDB

Tags:

mongodb

Is it possible to find the largest document size in MongoDB?

db.collection.stats() shows average size, which is not really representative because in my case sizes can differ considerably.

asked Jun 06 '13 by sashkello


People also ask

How do I find the largest file in MongoDB?

All you have to do is provide the MongoDB connection string and collection name. The script will output the top X largest documents when it finishes traversing the entire collection in batches. This is exactly what the built-in cursor allows for: it streams the data rather than storing the entire collection in RAM.

What is the MongoDB maximum document size?

The maximum BSON document size is 16 megabytes. This limit helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API.

What is the use of GridFS in MongoDB?

GridFS is the MongoDB specification for storing and retrieving large files such as images, audio files, video files, etc. It works like a file system for storing files, but its data is kept within MongoDB collections. GridFS can store files even larger than the 16 MB document size limit.
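
As a minimal sketch, uploading a file through GridFS with the Node.js driver's GridFSBucket might look like this (the connection string, database name, and file path are placeholders):

const { MongoClient, GridFSBucket } = require('mongodb');
const fs = require('fs');

async function uploadLargeFile(uri, dbName, filePath) {
  const client = await MongoClient.connect(uri);
  try {
    const bucket = new GridFSBucket(client.db(dbName));
    // Stream the file into GridFS; it is split into chunks (fs.chunks) with
    // metadata in fs.files, so the 16 MB document limit does not apply.
    await new Promise((resolve, reject) => {
      fs.createReadStream(filePath)
        .pipe(bucket.openUploadStream(filePath))
        .on('finish', resolve)
        .on('error', reject);
    });
  } finally {
    await client.close();
  }
}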


6 Answers

You can use a small shell script to get this value.

Note: this will perform a full collection scan, which will be slow on large collections.

let max = 0, id = null;
db.test.find().forEach(doc => {
    const size = Object.bsonsize(doc); // BSON size of this document, in bytes
    if (size > max) {
        max = size;
        id = doc._id;
    }
});
print(id, max); // _id and size of the largest document
answered by Abhishek Kumar


Note: this will attempt to store the whole result set in memory (from .toArray). Be careful on big data sets, and do not use it in production! Abhishek's answer has the advantage of working over a cursor instead of an in-memory array.

If you also want the _id, try this. Given a collection called "requests":

// Creates a sorted list, then takes the max
db.requests.find().toArray()
    .map(function(request) { return { size: Object.bsonsize(request), _id: request._id }; })
    .sort(function(a, b) { return a.size - b.size; })
    .pop();

// { "size" : 3333, "_id" : "someUniqueIdHere" }
answered by Mike Graf


Using the aggregation framework and a tiny bit of knowledge about the documents in the collection, finding the largest documents can be ~100x faster than the other answers. You'll get the results in seconds, vs. minutes with the other approaches (forEach, or worse, fetching all documents to the client).

You need to know which field(s) in your document might be the largest ones - which you almost always will know. There are only two practical¹ MongoDB types that can have variable sizes:

  • arrays
  • strings

The aggregation framework can calculate the length of each. Note that for arrays you won't get the size in bytes, but the length in elements. However, what typically matters more is which documents are the outliers, not exactly how many bytes they take.

Here's how it's done for arrays. As an example, let's say we have a collection of users in a social network and we suspect the array friends.ids might be very large (in practice you should probably keep a separate field like friendsCount in sync with the array, but for the sake of example, we'll assume that's not available):

db.users.aggregate([
    { $match: {
        'friends.ids': { $exists: true }
    }},
    { $project: { 
        sizeLargestField: { $size: '$friends.ids' } 
    }},
    { $sort: {
        sizeLargestField: -1
    }},
])

The key is to use the $size aggregation pipeline operator. It only works on arrays though, so what about text fields? We can use the $strLenBytes operator. Let's say we suspect the bio field might also be very large:

db.users.aggregate([
    { $match: {
        bio: { $exists: true }
    }},
    { $project: { 
        sizeLargestField: { $strLenBytes: '$bio' } 
    }},
    { $sort: {
        sizeLargestField: -1
    }},
])

You can also combine $size and $strLenBytes using $sum to calculate the size of multiple fields. In the vast majority of cases, 20% of the fields will take up 80% of the size (if not 10/90 or even 1/99), and large fields must be either strings or arrays.
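
For example, a minimal sketch combining both of the fields above (assuming every matched document has both friends.ids and bio):

db.users.aggregate([
    { $match: {
        'friends.ids': { $exists: true },
        bio: { $exists: true }
    }},
    { $project: {
        // combined "size" = number of friend ids + byte length of the bio
        sizeLargestFields: {
            $sum: [ { $size: '$friends.ids' }, { $strLenBytes: '$bio' } ]
        }
    }},
    { $sort: { sizeLargestFields: -1 } },
])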


¹ Technically, the rarely used binData type can also have variable size.

answered by Dan Dascalescu


Starting in Mongo 4.4, the new aggregation operator $bsonSize returns the size in bytes of a given document when encoded as BSON.

Thus, in order to find the BSON size of the biggest document:

// { "_id" : ObjectId("5e6abb2893c609b43d95a985"), "a" : 1, "b" : "hello" }
// { "_id" : ObjectId("5e6abb2893c609b43d95a986"), "c" : 1000, "a" : "world" }
// { "_id" : ObjectId("5e6abb2893c609b43d95a987"), "d" : 2 }
db.collection.aggregate([
  { $group: {
    _id: null,
    max: { $max: { $bsonSize: "$$ROOT" } }
  }}
])
// { "_id" : null, "max" : 46 }

This:

  • $groups all documents together
  • takes the $max of each document's $bsonSize
  • uses $$ROOT to refer to the current document whose BSON size is computed
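
If you also want to know which document is the largest, not just its size, a minimal sketch (still Mongo 4.4+) is to sort on the computed $bsonSize instead of grouping:

db.collection.aggregate([
  { $project: { size: { $bsonSize: "$$ROOT" } } }, // compute each document's BSON size
  { $sort: { size: -1 } },                         // biggest first
  { $limit: 1 }
])
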
answered by Xavier Guihot


Well, this is an old question, but I thought I'd share my two cents about it.

My approach: use MongoDB's mapReduce function.

First, let's get the size of each document:

db.myCollection.mapReduce
(
   function() { emit(this._id, Object.bsonsize(this)) }, // map each document to an _id / size pair
   function(key, val) { return val }, // val = document size (a single value per document)
   {
       query: {}, // query all documents
       out: { inline: 1 } // just return the result (don't create a new collection for it)
   }
)

This will return the size of every document, although it's worth mentioning that saving the output as a collection is a better approach (the result is an array of results inside the results field).

Second, let's get the maximum document size by adapting this query:

db.myCollection.mapReduce
(
    function() { emit(0, Object.bsonsize(this)) }, // map a fixed key (0) and use the document size as the value
    function(key, vals) { return Math.max.apply(Math, vals) }, // use Math.max to get the maximum of vals (each val = a document size)
    { query: {}, out: { inline: 1 } } // same as the first example
)

This will give you a single result whose value equals the maximum document size.

In short:

you may want to use the first example and save its output to a collection (change the out option to the name of the collection you want), then apply further aggregations to it (max size, min size, etc.)

-OR-

you may want to use a single query (the second option) to get a single stat (min, max, avg, etc.)
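
As a minimal sketch of the first option, writing the sizes to a hypothetical collection named docSizes and then querying it for the largest document:

db.myCollection.mapReduce(
    function() { emit(this._id, Object.bsonsize(this)) },
    function(key, val) { return val },
    { query: {}, out: "docSizes" } // persist the { _id, value } pairs instead of returning them inline
)

// largest document: the highest value (size in bytes)
db.docSizes.find().sort({ value: -1 }).limit(1)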

answered by ymz


If you're working with a huge collection, loading it all into memory at once will not work, since you'd need more RAM than the size of the entire collection.

Instead, you can process the entire collection in batches using the following package I created: https://www.npmjs.com/package/mongodb-largest-documents

All you have to do is provide the MongoDB connection string and collection name. The script will output the top X largest documents when it finishes traversing the entire collection in batches.
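
As a rough sketch of the same batching idea using the official Node.js driver directly (the package's internal implementation may differ; the connection details and topN parameter here are placeholders):

const { MongoClient } = require('mongodb');
const { calculateObjectSize } = require('bson');

async function findLargestDocuments(uri, dbName, collName, topN = 10) {
  const client = await MongoClient.connect(uri);
  try {
    const top = []; // { _id, size } pairs, kept sorted ascending by size
    // The cursor streams documents in batches, so memory use stays bounded
    for await (const doc of client.db(dbName).collection(collName).find()) {
      top.push({ _id: doc._id, size: calculateObjectSize(doc) });
      top.sort((a, b) => a.size - b.size);
      if (top.length > topN) top.shift(); // drop the smallest
    }
    return top.reverse(); // largest first
  } finally {
    await client.close();
  }
}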


answered by Elad Nava