I'm using two mappers and two reducers. I'm getting the following error:
java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text
This is because the first reducer writes <Text, IntWritable> and my second mapper expects <Text, IntWritable>, but, as I've read, mappers receive <LongWritable, Text> by default.
So I think I have to set the input format with something like:
job2.setInputFormatClass(MyInputFormat.class);
Is there a way to set the InputFormat class to receive <Text,IntWritable>?
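For reference, the second mapper is declared roughly like this (the class name is just a placeholder):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Declared to take <Text, IntWritable>, but the default TextInputFormat
// feeds it <LongWritable, Text>, which causes the ClassCastException.
public static class SecondMapper extends Mapper<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void map(Text key, IntWritable value, Context context)
            throws IOException, InterruptedException {
        context.write(key, value);
    }
}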
The input types to your mapper are set by the InputFormat as you suspect.
Generally, when you're chaining jobs together like this, it's best to use SequenceFileOutputFormat in one job and SequenceFileInputFormat in the next. This way the types are handled for you; you just set the types to match, i.e. the second mapper's inputs are the same as the previous reducer's outputs.
You don't need your own input format. All you need to do is set SequenceFileOutputFormat on the first job and SequenceFileInputFormat on the second job.
TextInputFormat uses LongWritable keys and Text values, but SequenceFileInputFormat uses whatever types you used to store the output.
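A minimal sketch of the driver configuration, assuming the intermediate output is written to an example path and that your mapper/reducer classes are set elsewhere:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

// Inside your driver's main(), for example:
Configuration conf = new Configuration();
Path intermediatePath = new Path("/tmp/job1-output");  // example path

// Job 1: write the reducer's <Text, IntWritable> pairs as a sequence file.
Job job1 = Job.getInstance(conf, "job1");
// ... setJarByClass / setMapperClass / setReducerClass as before ...
job1.setOutputKeyClass(Text.class);
job1.setOutputValueClass(IntWritable.class);
job1.setOutputFormatClass(SequenceFileOutputFormat.class);
FileOutputFormat.setOutputPath(job1, intermediatePath);

// Job 2: read the same types back, so the mapper receives <Text, IntWritable>.
Job job2 = Job.getInstance(conf, "job2");
job2.setInputFormatClass(SequenceFileInputFormat.class);
FileInputFormat.addInputPath(job2, intermediatePath);
// ... the second mapper can now extend Mapper<Text, IntWritable, ...> ...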