Hadoop - java.net.ConnectException: Connection refused

I want to connect to HDFS (on localhost) and I get this error:

Call From despubuntu-ThinkPad-E420/127.0.1.1 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

I followed all the steps in other posts, but I couldn't solve the problem. I am using Hadoop 2.7, and these are my configurations:

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/despubuntu/hadoop/name/data</value>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

I ran /usr/local/hadoop/bin/hdfs namenode -format and then /usr/local/hadoop/sbin/start-all.sh

But when I type "jps", the result is:

10650 Jps
4162 Main
5255 NailgunRunner
20831 Launcher

I need help...

asked Apr 27 '15 by Alex

2 Answers

Make sure that DFS, which your core-site.xml binds to localhost:54310, has actually started. You can check with the jps command, and you can start it with sbin/start-dfs.sh
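
For reference, a minimal check sequence (a sketch assuming the standard layout under /usr/local/hadoop from the question; the daemon names are what a healthy pseudo-distributed HDFS normally shows):

$ /usr/local/hadoop/sbin/start-dfs.sh
$ jps
# a healthy single-node HDFS should now list, in addition to Jps:
#   NameNode
#   DataNode
#   SecondaryNameNode
# if NameNode is missing, check the logs under /usr/local/hadoop/logs/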

answered Sep 21 '22 by nikk


I guess that you didn't set up your Hadoop cluster correctly. Please follow these steps:

Step 1: begin by setting up .bashrc:

vi $HOME/.bashrc

Put the following lines at the end of the file (change the Hadoop home to match your installation):

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
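
To pick up the new variables and aliases in your current shell, a quick check (assuming the paths above match your installation):

$ source $HOME/.bashrc
$ echo $HADOOP_HOME   # should print /usr/local/hadoop
$ fs -ls /            # alias for "hadoop fs -ls /"; requires HDFS to be running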

Step 2: edit hadoop-env.sh as follows:

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
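
The java-6-sun path above is only an example and will not exist on most modern systems; one way to find your actual JDK directory (assuming javac is on your PATH) is:

$ readlink -f "$(which javac)" | sed 's:/bin/javac::'
# prints something like /usr/lib/jvm/java-7-openjdk-amd64; use that as JAVA_HOME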

Step 3: now create a directory and set the required ownership and permissions:

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp
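
A quick sanity check that the ownership and mode came out as intended (hduser:hadoop is the user/group from this tutorial; substitute your own):

$ ls -ld /app/hadoop/tmp
# expected output, roughly:
# drwxr-x--- 2 hduser hadoop 4096 Apr 27 20:04 /app/hadoop/tmp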

Step 4: edit core-site.xml (put the properties inside the <configuration> element; on Hadoop 2.x, fs.default.name is the deprecated spelling of fs.defaultFS, but both work):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>

Step 5: edit mapred-site.xml (see the note after this snippet if the file doesn't exist yet):

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
</property>
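
On a fresh Hadoop 2.x install there is usually no mapred-site.xml yet, only a template; if that's the case, create it first (path assumes the default layout under /usr/local/hadoop):

$ cd /usr/local/hadoop/etc/hadoop
$ cp mapred-site.xml.template mapred-site.xml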

Step 6: edit hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
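
Once the files are edited, you can confirm Hadoop actually picked up the values with hdfs getconf, a standard Hadoop 2.x command:

$ /usr/local/hadoop/bin/hdfs getconf -confKey fs.default.name
hdfs://localhost:54310
$ /usr/local/hadoop/bin/hdfs getconf -confKey dfs.replication
1
# fs.default.name may trigger a deprecation warning suggesting fs.defaultFS; that's harmless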

Finally, format your HDFS (you only need to do this the first time you set up a Hadoop cluster):

 $ /usr/local/hadoop/bin/hadoop namenode -format
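
After formatting, start HDFS and run a small smoke test (a sketch; paths assume /usr/local/hadoop, and /user/despubuntu is just a home directory matching the username from the question):

$ /usr/local/hadoop/sbin/start-dfs.sh
$ jps    # NameNode, DataNode and SecondaryNameNode should now appear
$ /usr/local/hadoop/bin/hadoop fs -mkdir -p /user/despubuntu
$ /usr/local/hadoop/bin/hadoop fs -ls /
# if these succeed, localhost:54310 is accepting connections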

Hope this helps.

answered Sep 19 '22 by Yosser Abdellatif Goupil