I downloaded the VM from https://downloads.cloudera.com/demo_vm/vmware/cloudera-demo-vm-cdh4.0.0-vmware.tar.gz
I found that the services listed below are running after the system boots.
hadoop-0.20-mapreduce-jobtracker
hadoop-0.20-mapreduce-tasktracker
hadoop-yarn-nodemanager
hadoop-yarn-resourcemanager
hadoop-mapreduce-historyserver
hadoop-hdfs-namenode
hadoop-hdfs-datanode
The word count example runs fine and generates the expected output:
/usr/bin/hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount input output
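The output can be checked with, for example (part-r-00000 being the usual default name for a reduce output file):
$ /usr/bin/hadoop fs -cat output/part-r-00000 | head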
However, the above runs using the MRv2 (YARN) framework.
My goal is to run it using MRv1. As suggested in the Cloudera documentation, I stopped the MRv2 services (the stop commands are sketched after the snippet below) and edited /etc/hadoop/conf/mapred-site.xml, changing
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
to "classic" (I also tried "local"):
<property>
  <name>mapreduce.framework.name</name>
  <value>classic</value>
</property>
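For completeness, stopping the MRv2/YARN daemons amounted to something like the following (a sketch using the service names listed above, not a verbatim transcript):
$ sudo service hadoop-yarn-resourcemanager stop
$ sudo service hadoop-yarn-nodemanager stop
$ sudo service hadoop-mapreduce-historyserver stop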
I expected it to run using MRv1 (JobTracker and TaskTracker). However, I see the following error:
12/10/10 21:48:39 INFO mapreduce.Cluster: Failed to use org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid "mapreduce.jobtracker.address" configuration value for LocalJobRunner : "172.30.5.21:8021"
12/10/10 21:48:39 ERROR security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
......
Can someone suggest what could be wrong? Why is the error pointing to an invalid configuration?
I think your cluster still points to the MRv2 configuration directory instead of the MRv1 one.
Update (or install) the hadoop-conf alternative on each node in the cluster so that it points to the MRv1 configuration directory with a higher priority, and then restart all your services.
Eg:
$ sudo update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.mrv1 50
$ sudo update-alternatives --set hadoop-conf /etc/hadoop/conf.mrv1
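You can then sanity-check which configuration directory is active and restart the MRv1 daemons, roughly like this (/etc/hadoop/conf.mrv1 is just the example directory name from above; the service names are the ones from the question):
$ sudo update-alternatives --display hadoop-conf   # should list /etc/hadoop/conf.mrv1 as the current choice
$ sudo service hadoop-0.20-mapreduce-jobtracker restart
$ sudo service hadoop-0.20-mapreduce-tasktracker restart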