I'm running hadoop in a single-machine, local-only setup, and I'm looking for a nice, painless way to debug mappers and reducers in eclipse. Eclipse has no problem running mapreduce tasks. However, when I go to debug, it gives me this error:
12/03/28 14:03:23 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
Okay, so I do some research. Apparently, I should use eclipse's remote debugging facility, and add this to my hadoop-env.sh:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5000
I do that and I can step through my code in eclipse. Only problem is that, because of the "suspend=y", I can't use the "hadoop" command from the command line to do things like look at the job queue; it hangs, I'm imagining because it's waiting for a debugger to attach. Also, I can't run "hbase shell" when I'm in this mode, probably for the same reason.
So basically, if I want to flip back and forth between "debug mode" and "normal mode", I need to update hadoop-env.sh and restart my machine. Major pain. So I have a few questions:
Is there an easier way to debug mapreduce jobs in eclipse?
How come eclipse can run my mapreduce jobs just fine, but for debugging I need to use remote debugging?
Is there a way to tell hadoop to use remote debugging for mapreduce jobs, but to operate in normal mode for all other tasks (such as "hadoop queue" or "hbase shell")?
Is there an easier way to switch hadoop-env.sh configurations without rebooting my machine? hadoop-env.sh is not executable by default.
This is a more general question: what exactly is happening when I run hadoop in local-only mode? Are there any processes on my machine that are "always on" and executing hadoop jobs? Or does hadoop only do things when I run the "hadoop" command from the command line? What is eclipse doing when I run a mapreduce job from eclipse? I had to reference hadoop-core in my pom.xml in order to make my project work. Is eclipse submitting jobs to my installed hadoop instance, or is it somehow running it all from the hadoop-core-1.0.0.jar in my maven cache?
Here is my Main class:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Main {
    public static void main(String[] args) throws Exception {
        Job job = new Job();
        job.setJarByClass(Main.class);
        job.setJobName("FirstStage");
        FileInputFormat.addInputPath(job, new Path("/home/sangfroid/project/in"));
        FileOutputFormat.setOutputPath(job, new Path("/home/sangfroid/project/out"));
        job.setMapperClass(FirstStageMapper.class);
        job.setReducerClass(FirstStageReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
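One way to check what eclipse is actually doing, I think, is to print the configuration the job would pick up. This is only a sketch (WhichMode is a throwaway name), and it assumes only hadoop-core is on the classpath, with no *-site.xml files overriding the bundled defaults:

import org.apache.hadoop.conf.Configuration;

public class WhichMode {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // With only hadoop-core's bundled defaults, these should print "local" and "file:///",
        // i.e. the job would run in-process (LocalJobRunner) against the local filesystem
        // rather than being submitted to an installed hadoop instance.
        System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));
    }
}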
Make changes in the /bin/hadoop (hadoop-env.sh) script. Check to see what command has been fired, and add the remote debug configuration only when the command is jar:
if [ "$COMMAND" = "jar" ] ; then
exec "$JAVA" -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=8999 $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
else
exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
fi
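With this in place, hadoop jar should wait for a debugger on port 8999 (with -Xrunjdwp, suspend defaults to y), so you can attach from eclipse using a "Remote Java Application" debug configuration pointed at localhost:8999, while other commands such as hadoop queue or hbase shell keep starting normally because the extra options are added only for the jar command.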
The only way you can debug hadoop in eclipse is by running hadoop in local mode. The reason is that each map/reduce task runs in its own JVM, and when you don't run hadoop in local mode, eclipse won't be able to debug.
When you set hadoop to local mode, instead of using the hdfs API (which is the default), the hadoop file system changes to file:///. Thus, running hadoop fs -ls is not an hdfs command, but effectively hadoop fs -ls file:///, a path to your local directory. Neither the JobTracker nor the NameNode runs.
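If you want to pin this down explicitly rather than relying on the defaults, you can set the local-mode properties in the driver. This is only a sketch (LocalModeMain is a made-up name), and it assumes the Hadoop 1.x property names that hadoop-core-1.0.0 uses:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class LocalModeMain {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "local"); // run tasks in-process via LocalJobRunner
        conf.set("fs.default.name", "file:///"); // use the local filesystem instead of HDFS
        Job job = new Job(conf);
        job.setJarByClass(LocalModeMain.class);
        // ...set mapper, reducer, input/output paths as in the Main class above...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}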