
MongoDB aggregation - $group by date even if it doesn't exist

I have a query already which looks like this:

{$match:{
      "when":{$gt: new Date(ISODate().getTime() - 1000 * 60 * 60 * 24 * 30)}
}}, 
{$project:{
      "year":{$year:"$when"}, 
      "month":{$month:"$when"}, 
      "day": {$dayOfMonth:"$when"}
}}, 
{$group:{
      _id:{year:"$year", month:"$month", day:"$day"}, 
      "count":{$sum:1}
}},
{$sort:{
    _id: 1
}}

The result looks like this:

{ "_id" : { "year" : 2015, "month" : 10, "day" : 19 }, "count" : 1 }
{ "_id" : { "year" : 2015, "month" : 10, "day" : 21 }, "count" : 2 }

How could I get the result in the same format except having it for the last 30 days, even if count is 0?

Like this:

{ "_id" : { "year" : 2015, "month" : 10, "day" : 01 }, "count" : 1 }
{ "_id" : { "year" : 2015, "month" : 10, "day" : 02 }, "count" : 2 }
{ "_id" : { "year" : 2015, "month" : 10, "day" : 03 }, "count" : 0 }
...
{ "_id" : { "year" : 2015, "month" : 10, "day" : 30 }, "count" : 2 }
asked Oct 21 '15 by Andrew

1 Answer

Rather than trying to force the database to return results for data that does not exist, it is better practice to generate the blank data outside the query and merge the aggregation results into it. That way you have your "0" entries where there is no data, and the database only needs to return what is actually there.

Merging is simply a matter of building a hash table of unique keys with default values, then replacing those values with whatever the aggregation returns for matching keys. In JavaScript a plain object works well for this, since its keys are unique by definition. A minimal sketch of the idea is shown below.
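
As an isolated sketch of that merge step (the dates and results here are hard-coded placeholders, not the real query output):

var store = {};

// Seed one key per day with a 0 count
["2015-10-19", "2015-10-20", "2015-10-21"].forEach(function(day) {
    store[day] = 0;
});

// Replace the zeros wherever the aggregation actually returned a count
var results = [
    { "_id": "2015-10-19", "count": 1 },
    { "_id": "2015-10-21", "count": 2 }
];
results.forEach(function(result) {
    store[result._id] = result.count;
});

// store is now { "2015-10-19": 1, "2015-10-20": 0, "2015-10-21": 2 }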

I also prefer to return an actual Date object from the aggregation by using date math to "round" the date to the required interval, rather than using the date aggregation operators. Subtracting an epoch date ( new Date(0) ) with $subtract turns the value into a numeric timestamp, and taking the remainder with $mod lets you round that timestamp down to the required interval.

Conversely, $add with the same epoch date turns the rounded integer back into a BSON Date. It is also more efficient to do this directly in the $group stage rather than in a separate $project stage, since the modified date can be computed straight into the grouping _id value.
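
To illustrate what the $subtract/$mod/$add combination works out to, here is the same rounding done on a single timestamp in plain JavaScript (the input date is just an arbitrary example):

var OneDay = 1000 * 60 * 60 * 24;

var when = new Date("2015-10-21T14:35:02Z"); // example input date
var ms = when.getTime();                     // $subtract from new Date(0): ms since epoch
var rounded = ms - ( ms % OneDay );          // drop the $mod remainder within the day
var day = new Date( rounded );               // $add new Date(0): back to a Date

// day is 2015-10-21T00:00:00Z, the start of the day containing "when"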

As a shell example:

// Interval setup: midnight today, thirty days back, and one day past today
var Days = 30,
    OneDay = ( 1000 * 60 * 60 * 24 ),
    now = Date.now(),
    Today = now - ( now % OneDay ),
    nDaysAgo = Today - ( OneDay * Days ),
    startDate = new Date( nDaysAgo ),
    endDate = new Date( Today + OneDay ),
    store = {};

// Pre-seed the hash table with a 0 count for every day in the interval
var thisDay = new Date( nDaysAgo );
while ( thisDay < endDate ) {
    store[thisDay] = 0;
    thisDay = new Date( thisDay.valueOf() + OneDay );
}

// Group on the date rounded down to the start of its day, then fold the
// counts into the pre-seeded hash table
db.datejunk.aggregate([
    { "$match": { "when": { "$gte": startDate } }},
    { "$group": {
        "_id": {
            "$add": [
                { "$subtract": [
                    { "$subtract": [ "$when", new Date(0) ] },
                    { "$mod": [
                        { "$subtract": [ "$when", new Date(0) ] },
                        OneDay
                    ]}
                ]},
                new Date(0)
            ]
        },
        "count": { "$sum": 1 }
    }}
]).forEach(function(result){
    store[result._id] = result.count;
});

// Print every day in the interval, zeros included
Object.keys(store).forEach(function(k) {
    printjson({ "date": k, "count": store[k] });
});

This returns all days in the interval, including 0 counts where no data exists, like:

{ "date" : "Tue Sep 22 2015 10:00:00 GMT+1000 (AEST)", "count" : 0 }
{ "date" : "Wed Sep 23 2015 10:00:00 GMT+1000 (AEST)", "count" : 1 }
{ "date" : "Thu Sep 24 2015 10:00:00 GMT+1000 (AEST)", "count" : 0 }
{ "date" : "Fri Sep 25 2015 10:00:00 GMT+1000 (AEST)", "count" : 1 }
{ "date" : "Sat Sep 26 2015 10:00:00 GMT+1000 (AEST)", "count" : 1 }
{ "date" : "Sun Sep 27 2015 10:00:00 GMT+1000 (AEST)", "count" : 0 }
{ "date" : "Mon Sep 28 2015 10:00:00 GMT+1000 (AEST)", "count" : 1 }
{ "date" : "Tue Sep 29 2015 10:00:00 GMT+1000 (AEST)", "count" : 1 }
{ "date" : "Wed Sep 30 2015 10:00:00 GMT+1000 (AEST)", "count" : 0 }
{ "date" : "Thu Oct 01 2015 10:00:00 GMT+1000 (AEST)", "count" : 1 }
{ "date" : "Fri Oct 02 2015 10:00:00 GMT+1000 (AEST)", "count" : 2 }
{ "date" : "Sat Oct 03 2015 10:00:00 GMT+1000 (AEST)", "count" : 0 }
{ "date" : "Sun Oct 04 2015 11:00:00 GMT+1100 (AEST)", "count" : 1 }
{ "date" : "Mon Oct 05 2015 11:00:00 GMT+1100 (AEDT)", "count" : 0 }
{ "date" : "Tue Oct 06 2015 11:00:00 GMT+1100 (AEDT)", "count" : 1 }
{ "date" : "Wed Oct 07 2015 11:00:00 GMT+1100 (AEDT)", "count" : 2 }
{ "date" : "Thu Oct 08 2015 11:00:00 GMT+1100 (AEDT)", "count" : 2 }
{ "date" : "Fri Oct 09 2015 11:00:00 GMT+1100 (AEDT)", "count" : 1 }
{ "date" : "Sat Oct 10 2015 11:00:00 GMT+1100 (AEDT)", "count" : 1 }
{ "date" : "Sun Oct 11 2015 11:00:00 GMT+1100 (AEDT)", "count" : 1 }
{ "date" : "Mon Oct 12 2015 11:00:00 GMT+1100 (AEDT)", "count" : 0 }
{ "date" : "Tue Oct 13 2015 11:00:00 GMT+1100 (AEDT)", "count" : 3 }
{ "date" : "Wed Oct 14 2015 11:00:00 GMT+1100 (AEDT)", "count" : 2 }
{ "date" : "Thu Oct 15 2015 11:00:00 GMT+1100 (AEDT)", "count" : 2 }
{ "date" : "Fri Oct 16 2015 11:00:00 GMT+1100 (AEDT)", "count" : 0 }
{ "date" : "Sat Oct 17 2015 11:00:00 GMT+1100 (AEDT)", "count" : 3 }
{ "date" : "Sun Oct 18 2015 11:00:00 GMT+1100 (AEDT)", "count" : 0 }
{ "date" : "Mon Oct 19 2015 11:00:00 GMT+1100 (AEDT)", "count" : 0 }
{ "date" : "Tue Oct 20 2015 11:00:00 GMT+1100 (AEDT)", "count" : 0 }
{ "date" : "Wed Oct 21 2015 11:00:00 GMT+1100 (AEDT)", "count" : 2 }
{ "date" : "Thu Oct 22 2015 11:00:00 GMT+1100 (AEDT)", "count" : 1 }

Note that all "date" values here are actually still BSON dates; they just stringify like that in the output of the .printjson() shell method.
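
If you want the exact { year, month, day } shape from the question rather than Date keys, you can reshape the merged store in the same loop. A small sketch, assuming the store object from the shell example above and that its keys round-trip through new Date() as Date#toString output normally does:

Object.keys(store).forEach(function(k) {
    var d = new Date(k);   // the stringified key parses back into a Date
    printjson({
        "_id": { "year": d.getFullYear(), "month": d.getMonth() + 1, "day": d.getDate() },
        "count": store[k]
    });
});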

A more concise example can be shown with Node.js, where you can use utilities like async.parallel to build the hash and run the aggregation query at the same time, along with nedb, which implements the "hash" using functions familiar from working with a MongoDB collection. It also shows how this can scale for large results: use a real MongoDB collection for the merged output instead, and change the handling to stream processing of the cursor returned from .aggregate() (a rough sketch of that variation follows the listing below):

var async = require('async'),
    mongodb = require('mongodb'),
    MongoClient = mongodb.MongoClient,
    nedb = require('nedb'),
    DataStore = new nedb();

// Setup vars
var sample = 30,
    Days = 30,
    OneDay = ( 1000 * 60 * 60 * 24 ),
    now = Date.now(),
    Today = now - ( now % OneDay ) ,
    nDaysAgo = Today - ( OneDay * Days ),
    startDate = new Date( nDaysAgo ),
    endDate = new Date( Today + OneDay );

MongoClient.connect('mongodb://localhost/test',function(err,db) {

  var coll = db.collection('datejunk');

  async.series(
    [
      // Clear test collection
      function(callback) {
        coll.remove({},callback)
      },

      // Generate a random sample
      function(callback) {
        var bulk = coll.initializeUnorderedBulkOp();

        while (sample--) {
          bulk.insert({
            "when": new Date(
              Math.floor(
                Math.random()*(Today-nDaysAgo+OneDay)+nDaysAgo
              )
            )
          });
        }
        bulk.execute(callback);
      },

      // Aggregate data and dummy data
      function(callback) {
        console.log("generated");
        async.parallel(
          [
            // Dummy data per day
            function(callback) {
              var thisDay = new Date( nDaysAgo );
              async.whilst(
                function() { return thisDay < endDate },
                function(callback) {
                  DataStore.update(
                    { "date": thisDay },
                    { "$inc": { "count": 0 } },
                    { "upsert": true },
                    function(err) {
                      thisDay = new Date( thisDay.valueOf() + OneDay );
                      callback(err);
                    }
                  );
                },
                callback
              );
            },
            // Aggregate data in collection
            function(callback) {
              coll.aggregate(
                [
                  { "$match": { "when": { "$gte": startDate } } },
                  { "$group": {
                    "_id": {
                      "$add": [
                        { "$subtract": [
                          { "$subtract": [ "$when", new Date(0) ] },
                          { "$mod": [
                            { "$subtract": [ "$when", new Date(0) ] },
                            OneDay
                          ]}
                        ]},
                        new Date(0)
                      ]
                    },
                    "count": { "$sum": 1 }
                  }}
                ],
                function(err,results) {
                  if (err) return callback(err);
                  async.each(results,function(result,callback) {
                    DataStore.update(
                      { "date": result._id },
                      { "$inc": { "count": result.count } },
                      { "upsert": true },
                      callback
                    );
                  },callback);
                }
              );
            }
          ],
          callback
        );
      }
    ],
    // Return result or error
    function(err) {
      if (err) throw err;
      DataStore.find({},{ "_id": 0 })
        .sort({ "date": 1 })
        .exec(function(err,results) {
        if (err) throw err;
        console.log(results);
        db.close();
      });
    }
  );

});
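
As mentioned above, for large date ranges you could swap the in-memory array handling for the cursor returned from .aggregate() and process it as a stream. A rough sketch of that variation, assuming the same coll, pipeline, and DataStore from the example above and a driver version that exposes the aggregation cursor as a readable stream:

// Streaming variation: upsert each grouped day as it arrives from the cursor
var cursor = coll.aggregate(
  [
    { "$match": { "when": { "$gte": startDate } } },
    { "$group": {
      "_id": {
        "$add": [
          { "$subtract": [
            { "$subtract": [ "$when", new Date(0) ] },
            { "$mod": [
              { "$subtract": [ "$when", new Date(0) ] },
              OneDay
            ]}
          ]},
          new Date(0)
        ]
      },
      "count": { "$sum": 1 }
    }}
  ],
  { "cursor": { "batchSize": 100 } }   // ask for a cursor/stream rather than an array
);

cursor.on("data", function(result) {
  cursor.pause();                      // one upsert at a time
  DataStore.update(
    { "date": result._id },
    { "$inc": { "count": result.count } },
    { "upsert": true },
    function(err) {
      if (err) throw err;
      cursor.resume();
    }
  );
});

cursor.on("end", function() {
  console.log("stream complete");
});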

This is very well suited to data for charts and graphs. The basic procedure is the same for any language implementation, and it is ideally done with parallel processing for best performance, so async or threaded environments give you a real bonus, even though for a small sample like this the basic hash table can be generated in memory very quickly if your environment requires sequential operations.

So don't try to force the database to do this. There are certainly examples of SQL queries that do this "merge" on the database server, but it was never really a great idea there either; it is better handled with a similar "client" merge process, since doing it server-side just creates database overhead that really isn't required.

It's all very efficient and practical for the purpose, and of course it does not require running a separate aggregation query for each day in the period, which would not be efficient at all.

answered Sep 23 '22 by Blakes Seven