What are the key differences between doing map/reduce work on MongoDB using Hadoop map/reduce and using MongoDB's built-in map/reduce?
When should I pick one map/reduce engine over the other? What are the pros and cons of each engine for working with data stored in MongoDB?
My answer is based on my knowledge and experience of Hadoop MR and on what I have learned about MongoDB MR. Let's look at the major differences and then try to define criteria for selection. The differences are:
From the above I can suggest the following criteria for selection:
Select MongoDB MR if you need simple grouping and filtering and do not expect heavy shuffling between map and reduce. In other words, something simple.
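As a rough illustration of that "simple group by" case, here is a sketch of invoking MongoDB's map/reduce from the Java driver (the map and reduce bodies are JavaScript strings executed server-side by mongod). The `shop.orders` collection, its `customerId`/`total` fields, and the connection string are hypothetical, and newer driver versions may deprecate the `mapReduce` helper:

```java
import com.mongodb.client.MapReduceIterable;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class MongoMapReduceSketch {
    public static void main(String[] args) {
        // Hypothetical connection string and namespace.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders =
                    client.getDatabase("shop").getCollection("orders");

            // Map and reduce are JavaScript snippets run by the MongoDB server,
            // not by the Java process: group order totals per customer.
            String map = "function() { emit(this.customerId, this.total); }";
            String reduce = "function(key, values) { return Array.sum(values); }";

            MapReduceIterable<Document> results = orders.mapReduce(map, reduce);
            for (Document doc : results) {
                // Each result document has an _id (the key) and a value field.
                System.out.println(doc.toJson());
            }
        }
    }
}
```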
Select Hadoop MR if you're going to run complicated, computationally intensive MR jobs (for example, regression calculations). A large or unpredictable volume of data between map and reduce also suggests Hadoop MR.
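For comparison, below is a minimal sketch of a Hadoop MR job over the same hypothetical `shop.orders` data, assuming the mongo-hadoop connector's `MongoInputFormat`/`MongoOutputFormat` and its `mongo.input.uri`/`mongo.output.uri` settings. Exact class names and output-type handling vary between connector versions, so treat this as an outline rather than a tested job:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.bson.BSONObject;

import com.mongodb.hadoop.MongoInputFormat;
import com.mongodb.hadoop.MongoOutputFormat;

public class OrderTotalsJob {

    // Emits (customerId, total) for every order document read from MongoDB.
    public static class TotalMapper
            extends Mapper<Object, BSONObject, Text, DoubleWritable> {
        @Override
        protected void map(Object key, BSONObject value, Context context)
                throws IOException, InterruptedException {
            String customerId = value.get("customerId").toString();
            double total = ((Number) value.get("total")).doubleValue();
            context.write(new Text(customerId), new DoubleWritable(total));
        }
    }

    // Sums per-customer totals; arbitrarily heavy computation (e.g. a regression
    // over the grouped values) could live here instead of a simple sum.
    public static class TotalReducer
            extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text key, Iterable<DoubleWritable> values, Context context)
                throws IOException, InterruptedException {
            double sum = 0.0;
            for (DoubleWritable v : values) {
                sum += v.get();
            }
            context.write(key, new DoubleWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical URIs; the connector reads from and writes to these collections.
        conf.set("mongo.input.uri", "mongodb://localhost:27017/shop.orders");
        conf.set("mongo.output.uri", "mongodb://localhost:27017/shop.order_totals");

        Job job = Job.getInstance(conf, "order totals");
        job.setJarByClass(OrderTotalsJob.class);
        job.setMapperClass(TotalMapper.class);
        job.setReducerClass(TotalReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        job.setInputFormatClass(MongoInputFormat.class);
        job.setOutputFormatClass(MongoOutputFormat.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```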
Java is a stronger language with more libraries, especially statistical ones. That should be taken into account.
As of MongoDB 2.4, MapReduce jobs are no longer single-threaded.
Also, see the Aggregation Framework for a higher-performance, declarative way to perform aggregates and other analytical workloads in MongoDB.
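For instance, the same per-customer grouping can usually be expressed declaratively as an aggregation pipeline. The sketch below uses the Java driver's aggregation builders against the same hypothetical `shop.orders` collection; the `$match` threshold and field names are made up for illustration:

```java
import static com.mongodb.client.model.Accumulators.sum;
import static com.mongodb.client.model.Aggregates.group;
import static com.mongodb.client.model.Aggregates.match;
import static com.mongodb.client.model.Filters.gte;

import java.util.Arrays;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class AggregationSketch {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders =
                    client.getDatabase("shop").getCollection("orders");

            // $match then $group: the kind of simple filter + group-by that
            // map/reduce is often used for, expressed declaratively.
            orders.aggregate(Arrays.asList(
                    match(gte("total", 100)),
                    group("$customerId", sum("orderTotal", "$total"))
            )).forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}
```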