I am trying to find out where the output of a map task is saved to disk before it can be used by a reduce task.
Note: the version used is Hadoop 0.20.204 with the new API.
For example, when overriding the map method in the Mapper class:
// word is a Text field and one an IntWritable field of the Mapper class
@Override
public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
    String line = value.toString();
    StringTokenizer tokenizer = new StringTokenizer(line);
    while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        context.write(word, one);
    }
    // code that starts a new Job.
}
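Outside of Hadoop, the tokenizing loop in that map method behaves as sketched below (plain Java, no Hadoop classes; the class and method names are just for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class TokenizeDemo {
    // Mirrors the loop in map(): split a line on whitespace and collect
    // each token (what would otherwise be emitted via context.write).
    static List<String> tokenize(String line) {
        List<String> tokens = new ArrayList<>();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            tokens.add(tokenizer.nextToken());
        }
        return tokens;
    }

    public static void main(String[] args) {
        // StringTokenizer collapses runs of whitespace by default
        System.out.println(tokenize("hello hadoop  world"));
    }
}
```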
I am interested in finding out where context.write() ends up writing the data. So far I've run into:
FileOutputFormat.getWorkOutputPath(context);
Which gives me the following location on hdfs:
hdfs://localhost:9000/tmp/outputs/1/_temporary/_attempt_201112221334_0001_m_000000_0
When I try to use it as input for another job, it gives me the following error:
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:9000/tmp/outputs/1/_temporary/_attempt_201112221334_0001_m_000000_0
Note: the second job is started from inside the Mapper, so, technically, the temporary folder where the mapper task is writing its output already exists when the new job begins. Even so, it still says that the input path does not exist.
Any ideas where the temporary output is written? Or where I can find the output of a map task while a job with both a map and a reduce stage is running?
The MapReduce framework stores intermediate map output on the local disk of the node running the map task, not on HDFS, because replicating transient data across the cluster would be unnecessary overhead. That is why you cannot read it as HDFS input from another job.
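In Hadoop 0.20.x the local directory used for this intermediate data is controlled by the mapred.local.dir property. A minimal mapred-site.xml entry might look like the sketch below (the path shown is only an example; use whatever local disks your cluster has):

```xml
<!-- mapred-site.xml: intermediate map output is spilled under this
     local directory on each TaskTracker node (example path) -->
<property>
  <name>mapred.local.dir</name>
  <value>/var/hadoop/mapred/local</value>
</property>
```

If you genuinely need a second job to read a map task's output, write it explicitly to a side path on HDFS instead of relying on the framework's temporary files.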