
How to debug Spark application on Spark Standalone?


I am trying to debug a Spark application on a cluster with one master and several worker nodes. I have successfully set up the master and worker nodes using the Spark standalone cluster manager. I downloaded the Spark distribution with binaries and used the following commands to set up the master and worker nodes. These commands are executed from the Spark directory.

command for launching master

./sbin/start-master.sh 

command for launching worker node

./bin/spark-class org.apache.spark.deploy.worker.Worker master-URL 

command for submitting application

./bin/spark-submit --class Application --master URL ~/app.jar 

Now, I would like to understand the flow of control through the Spark source code on the worker nodes when I submit my application (I just want to use one of the provided examples that uses reduce()). I am assuming I should set up Spark in Eclipse. The Eclipse setup link on the Apache Spark website seems to be broken. I would appreciate some guidance on setting up Spark and Eclipse to enable stepping through Spark source code on the worker nodes.

Thanks!

asked Mar 17 '15 by RagHaven


People also ask

How do I debug the Spark application?

To start the application, select Run -> Debug SparkLocalDebug; this starts the debug session by attaching to port 5005. You should now see your spark-submit application running, and when it hits a breakpoint, control passes to IntelliJ.
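A common way to get a debuggable spark-submit JVM (an assumption on my part, not stated in this snippet; the class name and jar path are placeholders) is to pass the JDWP agent through the SPARK_SUBMIT_OPTS environment variable and then attach a Remote JVM Debug configuration to port 5005:

# have the submitted JVM listen for a debugger on port 5005 and wait for it to attach
export SPARK_SUBMIT_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 
./bin/spark-submit --class Application --master local[*] ~/app.jar 

With suspend=y the JVM does not run your application until the debugger attaches.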

How do I debug a failed Spark job?

Now you should be ready to debug. Start Spark with the debug options in place, then select the IntelliJ run configuration you just created and click Debug. IntelliJ should connect to your Spark application, which should then start running. You can set breakpoints, inspect variables, etc.

How do you debug a Spark cluster?

1. Start the job.
2. Open the Spark UI and find out where your process is running.
3. Use ssh to forward the port specified in the agent from the target node to your local machine through the edge node.
4. Start the remote debug session from your IDE, using localhost and the forwarded port as the IP and port.
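For the port-forwarding step, the ssh command might look like this (a sketch; edge-node and worker-7 are hypothetical hostnames, and 5005 is assumed to be the port configured in the debug agent):

# forward local port 5005 to port 5005 on the worker, tunnelling through the edge node
ssh -L 5005:worker-7:5005 user@edge-node 

Your IDE's remote debug configuration would then point at localhost:5005.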


2 Answers

It's important to distinguish between debugging the driver program and debugging one of the executors; they require different options passed to spark-submit.

For debugging the driver you can add the following to your spark-submit command. Then set your remote debugger to connect to the node you launched your driver program on.

--driver-java-options -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 

In this example port 5005 was specified, but you may need to customize that if something is already running on that port.
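Put together, a full invocation might look like the following (a sketch only; the class name, jar path, and master URL are placeholders taken from the question, and suspend=y means the driver waits until your debugger attaches):

# driver JVM listens on port 5005 and waits for a debugger before starting
./bin/spark-submit \
  --class Application \
  --master spark://master-host:7077 \
  --driver-java-options -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 \
  ~/app.jar 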

Connecting to an executor is similar; add the following options to your spark-submit command.

--num-executors 1 --executor-cores 1 --conf "spark.executor.extraJavaOptions=-agentlib:jdwp=transport=dt_socket,server=n,address=wm1b0-8ab.yourcomputer.org:5005,suspend=n" 

Replace the address with your local computer's address. (It's a good idea to test that you can access it from your Spark cluster.)

In this case, start your debugger in listening mode, then start your spark program and wait for the executor to attach to your debugger. It's important to set the number of executors to 1 or multiple executors will all try to connect to your debugger, likely causing problems.
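Combined with the rest of the submit command, that might look like this (a sketch; Application, the jar path, and the hostname are placeholders, the options target yarn-client mode as noted below, and your IDE must already be listening on port 5005 before the executor launches):

# single executor dials out to a debugger listening on your machine
./bin/spark-submit \
  --class Application \
  --master yarn-client \
  --num-executors 1 --executor-cores 1 \
  --conf "spark.executor.extraJavaOptions=-agentlib:jdwp=transport=dt_socket,server=n,address=mydevbox.example.org:5005,suspend=n" \
  ~/app.jar 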

These examples are for running with the Spark master set to yarn-client, although they may also work when running under Mesos. If you're running in yarn-cluster mode, you may have to have the driver attach to your debugger rather than attaching your debugger to the driver, since you won't necessarily know in advance which node the driver will be executing on.
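For the yarn-cluster case, one way to do that (a sketch based on the suggestion above, not something shown in the original answer; the hostname is a placeholder and your debugger must already be listening) is to invert the connection for the driver as well, so the driver JVM dials out to your machine:

# driver connects out to a listening debugger instead of waiting for one
--conf "spark.driver.extraJavaOptions=-agentlib:jdwp=transport=dt_socket,server=n,suspend=y,address=mydevbox.example.org:5005" 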

answered Sep 30 '22 by whaleberg


You could run the Spark application in local mode if you just need to debug the logic of your transformations. It can then be run in your IDE, and you'll be able to debug it like any other application:

val conf = new SparkConf().setMaster("local").setAppName("myApp") 

Of course, with this setup you're not distributing the work; distributing it is as simple as changing the master to point to your cluster.
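One common pattern (an assumption about how your application is written, not something stated in the answer) is to leave the master unset in code and supply it with spark-submit instead, so the same jar can be debugged locally and then run on the cluster unchanged:

# debug locally in a single JVM
./bin/spark-submit --class Application --master local[*] ~/app.jar 
# run the same jar on the standalone cluster
./bin/spark-submit --class Application --master spark://master-host:7077 ~/app.jar 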

answered Sep 30 '22 by Mauricio Bustos