I've been trying to set up a CDH4 installation of Hadoop. I have 12 machines, labeled hadoop01 - hadoop12, and the namenode, job tracker, and all data nodes have started fine. I'm able to view dfshealth.jsp and see that it has found all the data nodes.
However, whenever I try to start the secondary name node it gives an exception:
Starting Hadoop secondarynamenode: [ OK ]
starting secondarynamenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-secondarynamenode-hadoop02.dev.terapeak.com.out
Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:324)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:312)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:305)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:222)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:186)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:578)
This is my hdfs-site.xml file on the secondary name node:
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/1/dfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>10.100.20.168:50070</value>
    <description>
      The address and the base port on which the dfs NameNode Web UI will listen.
      If the port is 0, the server will start on a free port.
    </description>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.check.period</name>
    <value>3600</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.txns</name>
    <value>40000</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.edits.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.namenode.num.checkpoints.retained</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.restart.recover</name>
    <value>true</value>
  </property>
</configuration>
It would seem that something is wrong with the value given to dfs.namenode.http-address, but I'm not sure what. Should it start with http:// or hdfs://? I tried opening 10.100.20.168:50070 in lynx and it displayed a page. Any ideas?
Looks like I was missing the core-site.xml configuration on the secondary name node. Added that and the process started properly.
core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.100.20.168/</value>
  </property>
</configuration>
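For what it's worth, fs.defaultFS can also carry the NameNode RPC port explicitly (8020 is the usual CDH4 default; adjust if yours differs). A quick sanity check on the secondary namenode host, assuming the CDH4 packaged service name hadoop-hdfs-secondarynamenode, is something like:

# Print the value the HDFS daemons will actually read;
# it should show hdfs://10.100.20.168/ rather than file:///
hdfs getconf -confKey fs.defaultFS

# Restart the secondary namenode after fixing the config
# (service name assumed from CDH4 packaging)
sudo service hadoop-hdfs-secondarynamenode restart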
If you are running a single-node cluster, make sure you have set the HADOOP_PREFIX variable correctly, as described here: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
I faced the same issue, and it was resolved by setting this variable.
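For reference, a minimal sketch of setting it, assuming Hadoop is unpacked under /usr/local/hadoop (the path is only an example, not from the original post):

# Add to ~/.bashrc (or hadoop-env.sh), then re-source the file
export HADOOP_PREFIX=/usr/local/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin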