The question may seem fairly obvious, but I have run into it many times due to a badly configured hosts file on a Hadoop cluster.
Can anyone describe how to set up the hosts file and other related network configuration for Hadoop and similar environments (such as Cloudera)?
Especially when I have to add both the hostname and the FQDN.
Update
Here is the hosts file of one of the machines; its hostname is cdh4hdm and it has the role of Hadoop master:
127.0.0.1 cdh4hdm localhost
#127.0.1.1 cdh4hdm
172.26.43.40 cdh4hdm.imp.co.in kdc1
172.26.43.41 cdh4hbm.imp.co.in
172.26.43.42 cdh4s1.imp.co.in
172.26.43.43 cdh4s2.imp.co.in
172.26.43.44 cdh4s3.imp.co.in
172.26.43.45 cdh4s4.imp.co.in
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
On the cluster, some nodes resolve to their FQDN while others resolve only to their short hostname.
Also, the IP returned for the hostname is not correct: it shows 127.0.0.1 instead of the host's actual IP.
Please suggest a fix.
For UBUNTU
Hosts File and other configuration for Hadoop Cluster
Give a hostname to every cluster machine; to do so, add the hostname to the /etc/hostname file:
hostname-of-machine
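A minimal sketch of applying the new name without a reboot (assuming a systemd-based Ubuntu for the first command; cdh4hdm is just an example name):

sudo hostnamectl set-hostname cdh4hdm
# or, on older releases without systemd:
echo "cdh4hdm" | sudo tee /etc/hostname
sudo hostname cdh4hdm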
On all the hosts, the hosts file should look like this:
hosts
127.0.0.1 localhost
#127.0.1.1 localhost
<ip of host> FQDN hostname other_name
172.26.43.10 cdh4hdm.domain.com cdh4hdm kdc1
172.26.43.11 cdh4hbm.domain.com cdh4hbm
172.26.43.12 cdh4s1.domain.com cdh4s1
172.26.43.13 cdh4s2.domain.com cdh4s2
172.26.43.14 cdh4s3.domain.com cdh4s3
172.26.43.15 cdh4s4.domain.com cdh4s4
Note: make sure the line 127.0.1.1 localhost is commented out; it can cause problems with ZooKeeper and the cluster.
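To confirm that the machine's name now resolves to its real IP rather than a loopback address, a quick check (a sketch; getent consults /etc/hosts through the normal resolver order) is:

getent hosts $(hostname)
# expected: something like
# 172.26.43.10   cdh4hdm.domain.com cdh4hdm kdc1
# if it prints 127.0.0.1 or 127.0.1.1, the hosts file still needs fixing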
Add the DNS server IP to /etc/resolv.conf:
resolv.conf
search domain.com
nameserver 10.0.1.1
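To verify that the DNS server itself answers (assuming the nameserver 10.0.1.1 above really serves records for domain.com, which is only an example domain here), query it directly:

nslookup cdh4hdm.domain.com
# or
dig +short cdh4hdm.domain.com @10.0.1.1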
To verify the configuration, check the hosts file; you should be able to ping all of the machines by their hostnames, for example:
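A small loop (a sketch; the node names follow the example hosts file above) that pings each node once:

for h in cdh4hdm cdh4hbm cdh4s1 cdh4s2 cdh4s3 cdh4s4; do
    ping -c 1 "$h" > /dev/null && echo "$h OK" || echo "$h FAILED"
done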
To check the hostname and FQDN on each machine, run the following commands:
hostname       # should return the hostname
hostname -f    # fully qualified domain name (FQDN)
hostname -d    # domain name
All of these commands are the same on RHEL; only the way the hostname is set differs.
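For reference, a sketch of the RHEL/CentOS 6 way (the hostname lives in /etc/sysconfig/network there; RHEL 7+ uses hostnamectl as shown earlier):

# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=cdh4hdm.domain.com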
Source1 and Source2
If you mean the /etc/hosts file, then here is how I have set it in my Hadoop cluster:
127.0.0.1 localhost
192.168.0.5 master
192.168.0.6 slave1
192.168.0.7 slave2
192.168.0.18 slave3
192.168.0.3 slave4
192.168.0.4 slave5 nameOfCurrentMachine
where nameOfCurrentMachine is the machine on which this file is set, used here as slave5.
Some people say that the first line should be removed, but I have not faced any issues, nor have I tried removing it.
Then, the $HADOOP_CONF_DIR/masters file on the master node should be:
master
and the $HADOOP_CONF_DIR/slaves file on the master node should be:
slave1
slave2
slave3
slave4
slave5
On every other node, I have simply set these two files to contain just:
localhost
You should also make sure that you can ssh from the master to every other node (using its name, not its IP) without a password. This post describes how to achieve that; a minimal sketch follows.
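A sketch of the usual key-based setup (assuming the same user, here the placeholder hduser, exists on all nodes):

# on the master: generate a key pair with no passphrase
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# copy the public key to each slave (hduser is a placeholder user name)
for h in slave1 slave2 slave3 slave4 slave5; do
    ssh-copy-id hduser@"$h"
done
# test: this should print the remote hostname without a password prompt
ssh hduser@slave1 hostname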