Hadoop: Connecting to ResourceManager failed

After installing Hadoop 2.2 and trying to launch the pipes example, I got the following error (the same error shows up when trying to launch hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount someFile.txt /out):

/usr/local/hadoop$ hadoop pipes -Dhadoop.pipes.java.recordreader=true -Dhadoop.pipes.java.recordwriter=true -input someFile.txt -output /out -program bin/wordcount
DEPRECATED: Use of this script to execute mapred command is deprecated. Instead use the mapred command for it.
13/12/14 20:12:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/14 20:12:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/14 20:12:07 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:08 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:09 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:10 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:11 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:12 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:13 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:14 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
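Before changing any configuration it is worth confirming that the YARN daemons are actually running, since 0.0.0.0:8032 is simply the default ResourceManager address used when nothing else is configured. A quick check, assuming the daemons are started with the stock scripts under $HADOOP_HOME/sbin:

# list running Hadoop JVMs; a healthy single-node setup should show
# NameNode, DataNode, ResourceManager and NodeManager (plus Jps itself)
jps

# if ResourceManager/NodeManager are missing, start YARN and check again
$HADOOP_HOME/sbin/start-yarn.sh
jps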

My yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <!-- Site specific YARN configuration properties -->
</configuration>

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
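As a side note, fs.default.name still works in Hadoop 2.2 but is deprecated in favour of fs.defaultFS; an equivalent core-site.xml using the newer key would be the following sketch (same localhost:9000 value as above):

<configuration>
  <property>
    <!-- newer spelling of the deprecated fs.default.name key -->
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>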

mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hduser/mydata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hduser/mydata/hdfs/datanode</value>
  </property>
</configuration>
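For these directories to be usable, they have to exist and be writable by the user running Hadoop, and the namenode must be formatted once before the very first start. A rough sketch, using the paths from the file above:

# create the storage directories referenced in hdfs-site.xml
mkdir -p /home/hduser/mydata/hdfs/namenode /home/hduser/mydata/hdfs/datanode

# format the namenode once before the first start (this wipes existing HDFS metadata)
hdfs namenode -format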

I've verified that IPv6 is disabled, as it should be. Maybe my /etc/hosts is not correct?

/etc/hosts:

fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

127.0.0.1       localhost.localdomain localhost hduser
# Auto-generated hostname. Please do not remove this comment.
79.98.30.76     356114.s.dedikuoti.lt 356114
::1             localhost ip6-localhost ip6-loopback
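A quick way to sanity-check name resolution, independent of the file contents, is to ask the resolver directly (only a diagnostic sketch, not a known fix for this error):

hostname                              # the machine's short hostname
hostname -f                           # fully qualified name; should resolve via /etc/hosts
getent hosts localhost "$(hostname)"  # what the resolver actually returns for each name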
asked Dec 14 '13 by user3102852

People also ask

What is ResourceManager in Hadoop?

As previously described, ResourceManager (RM) is the master that arbitrates all the available cluster resources and thus helps manage the distributed applications running on the YARN system. It works together with the per-node NodeManagers (NMs) and the per-application ApplicationMasters (AMs).
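With default settings the ResourceManager also serves a web UI on port 8088 (yarn.resourcemanager.webapp.address), which is a quick way to see registered NodeManagers and submitted applications; for example, assuming it runs locally:

# open http://localhost:8088/cluster in a browser, or fetch it from the shell
curl http://localhost:8088/cluster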

Where can I find yarn-site.xml?

yarn-site.xml lives in the Hadoop configuration directory; in a Hadoop 2.x installation that is typically $HADOOP_HOME/etc/hadoop/yarn-site.xml.


1 Answer

The problem connecting to the ResourceManager was that I needed to add a few properties to yarn-site.xml:

<property>
  <name>yarn.resourcemanager.address</name>
  <value>127.0.0.1:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>127.0.0.1:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>127.0.0.1:8031</value>
</property>
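A more compact alternative, assuming a Hadoop version whose yarn-default.xml supports it (2.2 appears to), is to set only the ResourceManager hostname and let the individual addresses default to that host on ports 8032/8030/8031:

<property>
  <!-- yarn.resourcemanager.address, .scheduler.address and .resource-tracker.address
       all default to this host with their standard ports -->
  <name>yarn.resourcemanager.hostname</name>
  <value>127.0.0.1</value>
</property>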

My jobs still aren't running, but the connection is successful now.
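If the connection works but jobs never run, one useful next check is whether any NodeManager has actually registered with the ResourceManager; without one, applications just sit in the ACCEPTED state. For example:

# NodeManagers known to the ResourceManager; an empty list means none has registered
yarn node -list

# submitted applications and their current state (ACCEPTED, RUNNING, ...)
yarn application -list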

answered Sep 17 '22 by user3102852