I am trying to set up Hadoop on my local machine and was following this tutorial. I have also set HADOOP_HOME.
This is the command I am trying to run now
hduser@ubuntu:~$ /usr/local/hadoop/bin/start-all.sh
And this is the error I get
-su: /usr/local/hadoop/bin/start-all.sh: No such file or directory
This is what I added to my $HOME/.bashrc file
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop
# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"
# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat "$1" | lzop -dc | head -1000 | less
}
# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
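A quick way to load these settings into the current shell and sanity-check them (a minimal check, assuming the paths above):

source ~/.bashrc
echo $HADOOP_HOME    # should print /usr/local/hadoop
which hadoop         # should resolve to /usr/local/hadoop/bin/hadoop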
EDIT: After trying the solution given by mahendra, I am getting the following output:
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-mmt-HP-ProBook-430-G3.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-mmt-HP-ProBook-430-G3.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-mmt-HP-ProBook-430-G3.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-mmt-HP-ProBook-430-G3.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-mmt-HP-ProBook-430-G3.out
Try running:
hduser@ubuntu:~$ /usr/local/hadoop/sbin/start-all.sh
This is because start-all.sh and stop-all.sh are located in the sbin directory, while the hadoop binary is located in the bin directory.
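You can confirm this layout with a quick listing (a sketch, assuming a standard Hadoop 2.x install under /usr/local/hadoop):

ls /usr/local/hadoop/bin     # hadoop, hdfs, yarn, mapred, ...
ls /usr/local/hadoop/sbin    # start-all.sh, start-dfs.sh, start-yarn.sh, stop-all.sh, ...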
Also, update your .bashrc to include:
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
so that you can run start-all.sh directly.
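Since start-all.sh itself reports that it is deprecated, you can also start the daemons individually and then check that they came up with jps (the daemon names match the log output above):

/usr/local/hadoop/sbin/start-dfs.sh     # starts NameNode, DataNode, SecondaryNameNode
/usr/local/hadoop/sbin/start-yarn.sh    # starts ResourceManager, NodeManager
jps                                     # lists the running Hadoop daemon JVMs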