I want to bring down a single DataNode and TaskTracker so that some new changes I've made in my mapred-site.xml, such as mapred.reduce.child.java.opts, take effect. How do I do that? I don't want to bring down the whole cluster, since I have active jobs running.
Also, how can this be done while ensuring that the NameNode does not copy the data blocks of the temporarily down DataNode onto other nodes?
The NameNode can be restarted in the following ways: stop it individually with the sbin/hadoop-daemon.sh stop namenode command (run from the Hadoop installation directory), then start it again with sbin/hadoop-daemon.sh start namenode. Alternatively, run sbin/stop-all.sh followed by sbin/start-all.sh, which stops all the daemons first and then starts them again.
If the NameNode goes down, the whole Hadoop cluster becomes inaccessible and is considered dead. A DataNode stores the actual data and works as instructed by the NameNode. A Hadoop file system can have multiple DataNodes, but only one active NameNode.
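For reference, a minimal sketch of that NameNode-only restart, assuming the scripts live under $HADOOP_HOME/sbin (Hadoop 2.x; they are under bin/ in 1.x) and that jps from the JDK is available to check what is running:

# See which Hadoop daemons are running on this machine
jps
# Restart only the NameNode; DataNodes and TaskTrackers keep running
$HADOOP_HOME/sbin/hadoop-daemon.sh stop namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode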
To stop
You can stop the DataNode and TaskTracker from the Hadoop bin directory on the node you want to take down.
./hadoop-daemon.sh stop tasktracker
./hadoop-daemon.sh stop datanode
Note that hadoop-daemon.sh manages only the daemons on the machine where it is run; its plural counterpart, hadoop-daemons.sh, reads the slaves file in Hadoop's conf directory to do the same on every slave, for DataNodes and TaskTrackers alike.
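For example, a hedged sketch of taking a single slave down from the master, assuming passwordless ssh and that $HADOOP_HOME is set in the slave's shell environment (slave1 is a placeholder hostname):

# slave1 is a hypothetical hostname; substitute your own
ssh slave1 '$HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker'
ssh slave1 '$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode'
# Confirm no DataNode/TaskTracker process is left on the node
ssh slave1 jps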
To start
Again, run these on the node itself; hadoop-daemon.sh starts the daemons locally, while hadoop-daemons.sh starts them on every slave listed in the conf/slaves file.
./hadoop-daemon.sh start tasktracker
./hadoop-daemon.sh start datanode
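Once the DataNode is back up, it should re-register with the NameNode within a few heartbeats; one way to confirm is the dfsadmin report, run from any node:

# Lists live and dead DataNodes as seen by the NameNode (Hadoop 1.x)
hadoop dfsadmin -report
# Hadoop 2.x equivalent
hdfs dfsadmin -report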
In Hadoop 2.7.2 the TaskTracker is long gone (MapReduce now runs on YARN, so the per-node daemon is the NodeManager). To manually restart the services on a slave:
yarn-daemon.sh stop nodemanager
hadoop-daemon.sh stop datanode
hadoop-daemon.sh start datanode
yarn-daemon.sh start nodemanager
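As for the second part of the question (avoiding block re-replication): the NameNode only re-replicates a DataNode's blocks after declaring the node dead, which by default takes 2 * dfs.namenode.heartbeat.recheck-interval + 10 * dfs.heartbeat.interval = 2 * 5 min + 10 * 3 s = 10.5 minutes without a heartbeat (property names as in Hadoop 2.x). A restart that finishes within that window should therefore not trigger any block copying. A minimal sketch under those default timings:

# On the slave being reconfigured: stop, edit configs, start
yarn-daemon.sh stop nodemanager
hadoop-daemon.sh stop datanode
# ... edit mapred-site.xml / yarn-site.xml here ...
hadoop-daemon.sh start datanode
yarn-daemon.sh start nodemanager
# Provided this finishes well inside the ~10.5 minute dead-node
# timeout, the NameNode leaves the node's blocks where they are.
# From any node, verify nothing became under-replicated:
hdfs fsck / | grep -i 'under-replicated'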