I have configured a two-node cluster of Linux distributions running in parallel on VirtualBox.
The contents of the /etc/hosts file on the two machines are as follows:
hduser@ubuntu-master:~$ cat /etc/hosts
192.168.56.103 Ubuntu-Master master
192.168.56.102 LinuxMint-Slave slave
10.33.136.219 inkod2lp00100.techmahindra.com inkod2lp00100
hduser@LinuxMint-Slave ~ $ cat /etc/hosts
192.168.56.103 Ubuntu-Master master
192.168.56.102 LinuxMint-Slave slave
10.33.136.219 inkod2lp00100.techmahindra.com inkod2lp00100
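Hostname resolution between the two machines can be sanity-checked with standard tools (nothing here is specific to this setup), for example:
# run on each node; the names come from the /etc/hosts entries above
getent hosts master slave
ping -c 1 Ubuntu-Master
ping -c 1 LinuxMint-Slave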
The contents of hbase-site.xml (located in /usr/local/hbase/conf) on the two machines are as follows:
hduser@ubuntu-master:~$ cat /usr/local/hbase/conf/hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.tmp.dir</name>
<value>file:///usr/local/hbase/hbasetmp/hbase-${user.name}</value>
</property>
<property>
<name>hbase.master</name>
<value>Ubuntu-Master:16000</value>
<description>The host and port that the HBase master runs at.</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://Ubuntu-Master:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>Ubuntu-Master,LinuxMint-Slave</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>file:///usr/local/hbase/zookeeperdata</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2222</value>
</property>
</configuration>
hduser@LinuxMint-Slave ~ $ cat /usr/local/hbase/conf/hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.tmp.dir</name>
<value>file:///usr/local/hbase/hbasetmp/hbase-${user.name}</value>
</property>
<property>
<name>hbase.master</name>
<value>Ubuntu-Master:16000</value>
<description>The host and port that the HBase master runs at.</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://Ubuntu-Master:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>Ubuntu-Master,LinuxMint-Slave</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>file:///usr/local/hbase/zookeeperdata</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2222</value>
</property>
</configuration>
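One thing worth verifying about this configuration is that hbase.rootdir (hdfs://Ubuntu-Master:54310/hbase) matches the fs.defaultFS / fs.default.name URI that Hadoop itself is configured with; a host or port mismatch would also keep HBase from writing to HDFS. Assuming Hadoop is installed under /usr/local/hadoop (that path is a guess, it does not appear above), the two values can be compared with:
grep -A1 rootdir /usr/local/hbase/conf/hbase-site.xml
grep -A1 -E 'fs\.defaultFS|fs\.default\.name' /usr/local/hadoop/etc/hadoop/core-site.xml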
But when I start the HBase services on the master node, HMaster never stays up: it fails shortly after the initial startup.
Please check the service status (note that HMaster is missing from the jps output on the master):
hduser@ubuntu-master:~$ jps
3793 SecondaryNameNode
5332 HQuorumPeer
4006 ResourceManager
4134 NodeManager
4883 JobHistoryServer
6286 Jps
3512 NameNode
3637 DataNode
5535 HRegionServer
hduser@LinuxMint-Slave ~ $ jps
2504 DataNode
3175 HQuorumPeer
2651 NodeManager
3681 Jps
3291 HRegionServer
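The HMaster log quoted below lives under the HBase logs directory; the exact file name is assumed here, but it normally follows the hbase-<user>-master-<host>.log pattern and can be located and tailed with:
ls /usr/local/hbase/logs/
tail -n 100 /usr/local/hbase/logs/hbase-hduser-master-ubuntu-master.log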
And here is the log file for the HMaster service:
2015-02-03 12:21:14,168 WARN [Thread-12] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471) ………………….
……………………..
2015-02-03 12:21:14,185 DEBUG [master:Ubuntu-Master:60000] util.FSUtils: Unable to create version file at hdfs://Ubuntu-Master:54310/hbase, retrying
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471)
…………………………………………
2015-02-03 12:21:24,285 WARN [Thread-15] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471)
……………………………………………………….
2015-02-03 12:21:24,286 DEBUG [master:Ubuntu-Master:60000] util.FSUtils: Unable to create version file at hdfs://Ubuntu-Master:54310/hbase, retrying
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471)
……………………………………………..
2015-02-03 12:21:34,312 WARN [Thread-17] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471)
……………………………………………………………….
2015-02-03 12:21:44,333 FATAL [master:Ubuntu-Master:60000] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
…………………………………….
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
2015-02-03 12:21:44,334 INFO [master:Ubuntu-Master:60000] master.HMaster: Aborting
2015-02-03 12:21:44,334 DEBUG [master:Ubuntu-Master:60000] master.HMaster: Stopping service threads
2015-02-03 12:21:44,335 INFO [master:Ubuntu-Master:60000] ipc.RpcServer: Stopping server on 60000
2015-02-03 12:21:44,335 INFO [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: stopping
2015-02-03 12:21:44,339 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2015-02-03 12:21:44,339 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2015-02-03 12:21:44,339 INFO [master:Ubuntu-Master:60000] master.HMaster: Stopping infoServer
2015-02-03 12:21:44,364 INFO [master:Ubuntu-Master:60000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
2015-02-03 12:21:44,508 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-02-03 12:21:44,509 INFO [master:Ubuntu-Master:60000] zookeeper.ZooKeeper: Session: 0x14b4e1d0a040002 closed
2015-02-03 12:21:44,510 INFO [master:Ubuntu-Master:60000] master.HMaster: HMaster main thread exiting
2015-02-03 12:21:44,510 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:194)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2803)
2015-02-03 12:21:44,515 ERROR [Thread-5] hdfs.DFSClient: Failed to close file /hbase/.tmp/hbase.version
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471)
I had the exact same issue recently.
The way to overcome it is pretty simple but DANGEROUS.
You will definitely lose all your data on HDFS.
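Before wiping anything it is worth confirming the diagnosis: "could only be replicated to 0 nodes" while two DataNodes are running usually means the DataNodes are registered but unusable, most often because of a clusterID mismatch left over from an earlier namenode -format, or because their storage directories have no free space. Standard HDFS tooling shows this (the log path below is an assumption, adjust it to your install):
hdfs dfsadmin -report    # per-DataNode capacity and liveness as the NameNode sees it
grep -i 'Incompatible clusterIDs' /usr/local/hadoop/logs/hadoop-hduser-datanode-*.log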
You should do the following:
Stop all services: stop-hbase.sh && stop-yarn.sh && stop-dfs.sh
Delete the NameNode and DataNode data directories on both servers. Their locations are defined in /etc/hadoop/hdfs-site.xml; in my case the folders I had to delete were /home/hadoop/hadoopdata/hdfs/namenode and /home/hadoop/hadoopdata/hdfs/datanode. Instead you can simply delete the /home/hadoop/hadoopdata directory on both servers.
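Concretely, with the paths from my setup (adjust them to whatever your hdfs-site.xml says), the deletion on each server looks like:
# run on BOTH servers -- this destroys all data stored in HDFS
rm -rf /home/hadoop/hadoopdata/hdfs/namenode
rm -rf /home/hadoop/hadoopdata/hdfs/datanode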
Here is the piece of the config file you may need to look for:
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
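If you are not sure which directories your own cluster uses, pull the values straight from that file (newer Hadoop versions name these properties dfs.namenode.name.dir and dfs.datanode.data.dir instead):
grep -A1 -E 'dfs\.(name|data)' /etc/hadoop/hdfs-site.xml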
Run on the master: hadoop namenode -format (the namenode part may be different for you).
Run on the slave: hadoop datanode -format (the datanode part may be different for you once again).
Start Hadoop and the other services: start-dfs.sh && start-yarn.sh && start-hbase.sh
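Once everything is back up, verify that the NameNode sees both DataNodes with free space and that HMaster finally stays in the process list:
hdfs dfsadmin -report    # both DataNodes should be listed as live with non-zero remaining space
jps                      # HMaster should now appear on the master node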