 

MongoDB - Materialized View/OLAP Style Aggregation and Performance

Tags:

mongodb

nosql

I've been reading up on MongoDB, and I am particularly interested in the aggregation framework's abilities. I am looking at taking multiple datasets consisting of at least 10+ million rows per month and creating aggregations from this data. This is time series data.

Example: using Oracle OLAP, you can load data at the second/minute level and have it roll up to hours, days, weeks, months, quarters, years, etc. Simply define your dimensions and go from there. This works quite well.

So far I have read that MongoDB can handle the above using its map reduce functionality. Map reduce can be implemented so that it updates results incrementally. This makes sense, since I would be loading new data weekly or monthly and would expect to only have to process the new data being loaded.
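For the curious, the incremental pattern works because a well-formed reduce function can be re-applied to its own previous output. In the MongoDB shell you would run something like `db.events.mapReduce(map, reduce, { query: { ts: { $gte: lastRun } }, out: { reduce: "hourly_totals" } })` (collection and field names here are hypothetical); the `{ reduce: ... }` output mode re-reduces new results against the documents already stored in the target collection. The sketch below emulates that merge in plain JavaScript so the key property is visible — note that real MongoDB map functions reference `this` and a global `emit()`, while this toy version passes both in explicitly so it runs outside the server:

```javascript
// Map: roll each event up to the hour it occurred in (UTC).
function map(doc, emit) {
  var hour = new Date(doc.ts);
  hour.setUTCMinutes(0, 0, 0);
  emit(hour.toISOString(), { count: 1, total: doc.value });
}

// Reduce: must be associative/idempotent so it can merge old + new output.
function reduce(key, values) {
  var out = { count: 0, total: 0 };
  values.forEach(function (v) {
    out.count += v.count;
    out.total += v.total;
  });
  return out;
}

// Minimal emulation of out: { reduce: "..." } — fold a new batch into
// previously stored results by re-reducing stored + new values per key.
function incrementalMapReduce(store, batch) {
  var emitted = {};
  batch.forEach(function (doc) {
    map(doc, function (key, value) {
      (emitted[key] = emitted[key] || []).push(value);
    });
  });
  Object.keys(emitted).forEach(function (key) {
    var values = emitted[key];
    if (store[key]) values = values.concat([store[key]]);
    store[key] = reduce(key, values);
  });
  return store;
}
```

Running this once per weekly or monthly load touches only the new documents, which is exactly what the `query` filter plus `out: { reduce: ... }` achieves server-side.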

I have also read that map reduce in MongoDB can be slow. The usual suggestion for overcoming this is to use cheap commodity hardware and spread the load across multiple machines.

So here are my questions.

  1. How good (or bad) does MongoDB handle map reduce in terms of performance? Do you really need a lot of machines to get acceptable performance?
  2. In terms of workflow, is it relatively easy to store and merge the incremental results generated by map reduce?
  3. How much of a performance improvement does the aggregation framework offer?
  4. Does the aggregation framework offer the ability to store results incrementally, in a similar manner to the existing map/reduce functionality?

I appreciate your responses in advance!

Dave asked Aug 04 '12

2 Answers

How good (or bad) does MongoDB handle map reduce in terms of performance? Do you really need a lot of machines to get acceptable performance?

MongoDB's Map/Reduce implementation (as of 2.0.x) is limited by its reliance on the single-threaded SpiderMonkey JavaScript engine. There has been some experimentation with the V8 JavaScript engine, and improved concurrency and performance are an overall design goal.

The new Aggregation Framework is written in C++ and has a more scalable implementation including a "pipeline" approach. Each pipeline is currently single-threaded, but you can run different pipelines in parallel. The aggregation framework won't currently replace all jobs that can be done in Map/Reduce, but does simplify a lot of common use cases.
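To make the "pipeline" approach concrete: against MongoDB 2.2 you would express a rollup as stages, e.g. `db.events.aggregate({ $match: { type: "pageview" } }, { $group: { _id: "$day", total: { $sum: "$value" } } })` (collection and field names are hypothetical). Each stage transforms the document stream and feeds the next. The toy evaluator below walks a `$match`/`$group`-only pipeline over an in-memory array to illustrate that flow — it is a sketch of the pipeline idea, not the server's C++ implementation:

```javascript
// Apply a tiny subset of the aggregation pipeline ($match with equality
// predicates, $group with $sum accumulators) to an array of documents.
function runPipeline(docs, stages) {
  return stages.reduce(function (stream, stage) {
    if (stage.$match) {
      // $match: keep documents whose fields equal the spec's values.
      return stream.filter(function (doc) {
        return Object.keys(stage.$match).every(function (f) {
          return doc[f] === stage.$match[f];
        });
      });
    }
    if (stage.$group) {
      // $group: bucket by the _id field reference, applying $sum.
      var spec = stage.$group;
      var groups = {};
      stream.forEach(function (doc) {
        var id = doc[spec._id.slice(1)]; // "$field" -> doc.field
        var g = groups[id] || (groups[id] = { _id: id });
        Object.keys(spec).forEach(function (out) {
          if (out === "_id") return;
          var arg = spec[out].$sum; // only $sum supported in this toy
          var inc = typeof arg === "string" ? doc[arg.slice(1)] : arg;
          g[out] = (g[out] || 0) + inc;
        });
      });
      return Object.keys(groups).map(function (k) { return groups[k]; });
    }
    throw new Error("unsupported stage: " + Object.keys(stage)[0]);
  }, docs);
}
```

Because each stage only consumes the previous stage's output, the server is free to run different pipelines in parallel even though an individual pipeline is single-threaded.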

A third option is to use MongoDB for storage in combination with Hadoop via the MongoDB Hadoop Connector. Hadoop currently has a more scalable Map/Reduce implementation and can access MongoDB collections for input and output via the Hadoop Connector.

In terms of workflow, is it relatively easy to store and merge the incremental results generated by map reduce?

Map/Reduce has several output options, including merging the incremental output into a previous output collection or returning the results inline (in memory).
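For reference, these are the output modes `mapReduce` accepts via its `out` parameter (the target collection name below is a hypothetical example):

```javascript
// mapReduce `out` modes (MongoDB 2.0/2.2), annotated.
var outOptions = {
  replace: { replace: "weekly_rollup" }, // drop and rewrite the collection
  merge:   { merge: "weekly_rollup" },   // new results overwrite old on key collision
  reduce:  { reduce: "weekly_rollup" },  // re-reduce new + existing values per key
  inline:  { inline: 1 }                 // return results in memory, no collection
};
```

For the incremental time-series workflow described in the question, the `reduce` mode is the one that folds each new batch into the previously stored totals.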

How much of a performance improvement does the aggregation framework offer?

This really depends on the complexity of your Map/Reduce. Overall, the aggregation framework is faster (and in some cases, significantly so). You're best off doing a comparison for your own use case(s).

MongoDB 2.2 isn't officially released yet, but the 2.2rc0 release candidate has been available since mid-July.

Does the aggregation framework offer the ability to store results incrementally, in a similar manner to the existing map/reduce functionality?

The aggregation framework is currently limited to returning results inline so you have to process/display the results when they are returned. The result document is also restricted to the maximum document size in MongoDB (currently 16MB).

There is a proposed $out pipeline command (SERVER-3253) which will likely be added in a future release to provide more output options.

Some further reading that may be of interest:

  • a presentation at MongoDC 2011 on Time Series Data Storage in MongoDB
  • a presentation at MongoSF 2012 on MongoDB's New Aggregation Framework
  • capped collections, which could be used similar to RRD
Stennie answered Sep 22 '22


Couchbase's map reduce is designed for building incremental indexes, which can then be dynamically queried at the level of rollup you are looking for (much like the Oracle example you gave in your question).

Here is a write up of how this is done using Couchbase: http://www.couchbase.com/docs/couchbase-manual-2.0/couchbase-views-sample-patterns-timestamp.html

J Chris A answered Sep 23 '22