
How to configure Hadoop's mapper so that it takes <Text, IntWritable>

I'm using two mappers and two reducers. I'm getting the following error:

java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text

This happens because the first reducer writes <Text, IntWritable> and my second mapper expects <Text, IntWritable>, but, as I've read, mappers take <LongWritable, Text> by default.

So I have to set the input format with something like:

job2.setInputFormatClass(MyInputFormat.class);

Is there a way to set the InputFormat class to receive <Text, IntWritable>?
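
To illustrate, my second mapper looks roughly like this (class names are placeholders):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Expects the first job's <Text, IntWritable> output as its input.
// With the default TextInputFormat, the framework feeds it
// <LongWritable, Text> instead, which causes the ClassCastException above.
public class SecondMapper extends Mapper<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void map(Text key, IntWritable value, Context context)
            throws IOException, InterruptedException {
        context.write(key, value); // pass-through; the real logic goes here
    }
}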

asked Nov 17 '25 by Hernan

2 Answers

The input types to your mapper are set by the InputFormat, as you suspect.

Generally, when you're chaining jobs together like this, it's best to use SequenceFileOutputFormat in the first job and SequenceFileInputFormat in the next. That way the types are handled for you, and you set them to match, i.e. the second mapper's inputs are the same as the previous reducer's outputs.
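
A minimal driver sketch of what I mean (FirstMapper, FirstReducer, SecondMapper, SecondReducer and Driver are placeholder names for your own classes, the paths are illustrative, and the code belongs in a main that throws Exception):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

Configuration conf = new Configuration();

// Job 1: store the reducer's <Text, IntWritable> output as a sequence file.
Job job1 = Job.getInstance(conf, "job1");
job1.setJarByClass(Driver.class);
job1.setMapperClass(FirstMapper.class);
job1.setReducerClass(FirstReducer.class);
job1.setOutputKeyClass(Text.class);
job1.setOutputValueClass(IntWritable.class);
job1.setOutputFormatClass(SequenceFileOutputFormat.class);
FileInputFormat.addInputPath(job1, new Path("input"));
FileOutputFormat.setOutputPath(job1, new Path("intermediate"));
job1.waitForCompletion(true);

// Job 2: SequenceFileInputFormat hands the stored <Text, IntWritable>
// pairs straight to the second mapper; no custom InputFormat needed.
Job job2 = Job.getInstance(conf, "job2");
job2.setJarByClass(Driver.class);
job2.setMapperClass(SecondMapper.class);
job2.setReducerClass(SecondReducer.class);
job2.setOutputKeyClass(Text.class);
job2.setOutputValueClass(IntWritable.class);
job2.setInputFormatClass(SequenceFileInputFormat.class);
FileInputFormat.addInputPath(job2, new Path("intermediate"));
FileOutputFormat.setOutputPath(job2, new Path("output"));
job2.waitForCompletion(true);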

answered Nov 19 '25 by Binary Nerd


You don't need your own input format. All you need to do is set SequenceFileOutputFormat for the first job and SequenceFileInputFormat for the second job.

TextInputFormat (the default) uses LongWritable keys and Text values, but SequenceFileInputFormat uses whatever types were used to store the output.
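
If you want to verify which types a given sequence file holds, its header records them. A quick check (the part-file path is just an example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

Configuration conf = new Configuration();
// The key/value class names were written into the file header by the first job.
try (SequenceFile.Reader reader = new SequenceFile.Reader(conf,
        SequenceFile.Reader.file(new Path("intermediate/part-r-00000")))) {
    System.out.println(reader.getKeyClassName());   // org.apache.hadoop.io.Text
    System.out.println(reader.getValueClassName()); // org.apache.hadoop.io.IntWritable
}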

answered Nov 19 '25 by vefthym


