In the "Hadoop : The Definitive Guide" book, there is a sample program with the below code.
JobConf conf = new JobConf(MaxTemperature.class);
conf.setJobName("Max temperature");
FileInputFormat.addInputPath(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));
conf.setMapperClass(MaxTemperatureMapper.class);
conf.setReducerClass(MaxTemperatureReducer.class);
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);
The MR framework should be able to figure out the output key and value classes from the Mapper and Reducer classes that are set on the JobConf. Why do we need to explicitly set the output key and value classes on the JobConf? Also, there is no similar API for the input key/value pair.
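For context, the mapper in that example declares its key/value types only as generic type parameters, roughly along these lines (an old-API sketch from memory, with the map body elided):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class MaxTemperatureMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
    // The output key/value types (Text, IntWritable) appear only as
    // generic type parameters here, never as runtime arguments.
    @Override
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        // ... parse the line and emit (year, temperature) ...
    }
}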
The reason is type erasure [1]. The output key/value classes are declared only as generic type parameters on the Mapper and Reducer. By the time the job is set up (which happens at run time, not compile time), those generic parameters have been erased, so the framework cannot recover the classes from the generics and you have to supply them explicitly.
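Here is a small, self-contained illustration of what erasure means at run time (plain Java, nothing Hadoop-specific):

import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> ints = new ArrayList<Integer>();
        // Both print "class java.util.ArrayList": the <String> and <Integer>
        // type arguments are erased by the compiler, so at run time the
        // objects carry no record of them.
        System.out.println(strings.getClass());
        System.out.println(ints.getClass());
        System.out.println(strings.getClass() == ints.getClass()); // prints true
    }
}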
The input key/value classes, on the other hand, can be read from the input file itself. In the case of SequenceFiles, the classes are stored in the file header; you can even see the class names if you open a SequenceFile in a text editor. That header has to be written before the data, and since every map output is written as a SequenceFile, you need to provide the classes up front.
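For example, a reader can discover the key/value classes directly from a SequenceFile's header; a minimal sketch using the old-style Hadoop API (the file path passed as args[0] is just a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class SequenceFileHeader {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, new Path(args[0]), conf);
        try {
            // The header records the key and value class names, so a reader
            // needs no external configuration to learn them.
            System.out.println("key class:   " + reader.getKeyClassName());
            System.out.println("value class: " + reader.getValueClassName());
        } finally {
            reader.close();
        }
    }
}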
[1] http://download.oracle.com/javase/tutorial/java/generics/erasure.html