
Mapping through two data sets with Hadoop

Suppose I have two key-value data sets, call them Set A and Set B. I want to update all the data in Set A with data from Set B wherever the two match on keys.

Because I'm dealing with such large quantities of data, I'm using Hadoop MapReduce. My concern is that to do this key matching between A and B, I need to load all of Set A (a lot of data) into the memory of every mapper instance. That seems rather inefficient.

Would there be a recommended way to do this that doesn't require repeating the work of loading in A every time?

Some pseudocode to clarify what I'm currently doing:

Load in Data Set A   # This seems like the expensive step to always be doing
For each key/value pair in Data Set B:
   If key is in Data Set A:
      Update Data Set A
asked Feb 20 '23 by bgcode

2 Answers

According to the documentation, the MapReduce framework includes the following steps:

  1. Map
  2. Sort/Partition
  3. Combine (optional)
  4. Reduce

You've described one way to perform your join: loading all of Set A into memory in each Mapper. You're correct that this is inefficient.

Instead, observe that a large join can be partitioned into arbitrarily many smaller joins if both sets are sorted and partitioned by key. MapReduce sorts the output of each Mapper by key in step (2) above. Sorted Map output is then partitioned by key, so that one partition is created per Reducer. For each unique key, the Reducer will receive all values from both Set A and Set B.

To finish your join, the Reducer need only output the key together with the updated value from Set B if one exists, or the original value from Set A otherwise. To distinguish values from Set A and Set B, have the Mapper tag each output value with its source.
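The tagging scheme above can be sketched roughly as follows. This is a plain-Python simulation of what the framework does, not real Hadoop code; the names (`map_a`, `reduce_join`, the `'A'`/`'B'` tags) are illustrative assumptions:

```python
from itertools import groupby
from operator import itemgetter

def map_a(record):
    key, value = record
    return (key, ('A', value))   # tag values coming from Set A

def map_b(record):
    key, value = record
    return (key, ('B', value))   # tag values coming from Set B

def reduce_join(key, tagged_values):
    # Prefer the updated value from Set B; fall back to Set A's original.
    b_values = [v for tag, v in tagged_values if tag == 'B']
    a_values = [v for tag, v in tagged_values if tag == 'A']
    return (key, b_values[0] if b_values else a_values[0])

# Simulate the framework: map both sets, then sort/partition by key,
# then reduce each key group.
set_a = [('k1', 'old1'), ('k2', 'old2'), ('k3', 'old3')]
set_b = [('k1', 'new1'), ('k3', 'new3')]

mapped = [map_a(r) for r in set_a] + [map_b(r) for r in set_b]
mapped.sort(key=itemgetter(0))   # the framework's sort/partition step

joined = [reduce_join(k, [v for _, v in group])
          for k, group in groupby(mapped, key=itemgetter(0))]
# joined == [('k1', 'new1'), ('k2', 'old2'), ('k3', 'new3')]
```

In real Hadoop you would put the tagging in two Mapper classes (or one Mapper that inspects the input split's source path) and the preference logic in the Reducer; the sort and grouping come for free from the framework.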

answered Feb 24 '23 by Shahin


All of the answers posted so far are correct - this should be a Reduce-side join... but there's no need to reinvent the wheel! Have you considered Pig, Hive, or Cascading for this? They all have joins built-in, and are fairly well optimized.

answered Feb 24 '23 by Joe K