I have IntelliJ IDEA set up with Apache Spark 1.4.
I want to be able to add debug points to my Spark Python scripts so that I can debug them easily.
I am currently running this bit of Python to initialise the Spark process:
proc = subprocess.Popen([SPARK_SUBMIT_PATH, scriptFile, inputFile],
                        shell=SHELL_OUTPUT,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)  # stderr also needs a pipe, or proc.stderr is None
if VERBOSE:
    print proc.stdout.read()
    print proc.stderr.read()
When spark-submit eventually calls myFirstSparkScript.py, the debug mode is not engaged and it executes as normal. Unfortunately, editing the Apache Spark source code and running a customised copy is not an acceptable solution.
Does anyone know if it is possible to have spark-submit call the Apache Spark script in debug mode? If so, how?
As far as I understand your intentions, what you want is not directly possible given the Spark architecture. Even without the subprocess call, the only part of your program that is accessible directly on the driver is the SparkContext. From the rest you are effectively isolated by different layers of communication, including at least one (in local mode) JVM instance. To illustrate that, let's use a diagram from the PySpark Internals documentation.
What is in the left box is the part that is accessible locally and could be used to attach a debugger. Since it is mostly limited to JVM calls, there is really nothing there that should be of interest to you, unless you are actually modifying PySpark itself.
What is on the right happens remotely and, depending on the cluster manager you use, is pretty much a black box from a user's perspective. Moreover, there are many situations when the Python code on the right does nothing more than call the JVM API.
That was the bad part. The good part is that most of the time there should be no need for remote debugging. Excluding access to objects like TaskContext, which can be easily mocked, every part of your code should be easily runnable / testable locally without using a Spark instance at all.
The functions you pass to actions / transformations take standard, predictable Python objects and are expected to return standard Python objects as well. What is also important, they should be free of side effects.
So at the end of the day you have two parts of your program: a thin layer that can be accessed interactively and tested purely based on inputs / outputs, and a "computational core" which doesn't require Spark for testing / debugging.
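For instance (the function and data below are purely illustrative), the kind of function you would pass to map or filter can be exercised with nothing but plain Python:
import re

def parse_log_line(line):
    # Plain Python; later this would be passed to rdd.map(parse_log_line)
    match = re.match(r"(\S+) (\d+)", line)
    return (match.group(1), int(match.group(2)))

def keep_errors(record):
    # Predicate later passed to rdd.filter(keep_errors)
    return record[0] == "ERROR"

# Ordinary unit test / debugger session, no SparkContext required:
assert parse_log_line("ERROR 42") == ("ERROR", 42)
assert keep_errors(("ERROR", 42))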
That being said, you're not completely out of options here.
(Passively attaching a debugger to a running interpreter)
Both plain GDB and the PyCharm debugger (Attach to Local Process) can be attached to a running process. This can be done only once the PySpark daemon and/or worker processes have been started. In local mode you can force this by executing a dummy action, for example:
sc.parallelize([], n).count()
where n is the number of "cores" available in local mode (local[n]). Example procedure, step by step, on a Unix-like system:
Start PySpark shell:
$SPARK_HOME/bin/pyspark
Use pgrep to check that no daemon process is running yet:
➜ spark-2.1.0-bin-hadoop2.7$ pgrep -f pyspark.daemon
➜ spark-2.1.0-bin-hadoop2.7$
The same thing can be determined in PyCharm via alt+shift+a and choosing Attach to Local Process, or via Run -> Attach to Local Process.
At this point you should see only the PySpark shell (and possibly some unrelated processes).
Execute a dummy action:
sc.parallelize([], 1).count()
Now you should see both the daemon and a worker (here only one):
➜ spark-2.1.0-bin-hadoop2.7$ pgrep -f pyspark.daemon
13990
14046
➜ spark-2.1.0-bin-hadoop2.7$
The process with the lower pid is the daemon; the one with the higher pid is a (possibly ephemeral) worker.
At this point you can attach a debugger to the process of interest:
With plain GDB by calling:
gdb python <pid of running process>
The biggest disadvantage of this approach is that you have to find the right interpreter at the right moment.
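For what it's worth, if the CPython GDB helpers (python-gdb.py) are available on your system, a session could look roughly like this (the pid is taken from the pgrep output above and is purely illustrative):
gdb python 14046
(gdb) py-bt      # Python-level backtrace of the attached worker
(gdb) py-list    # show the surrounding Python source
(gdb) detach
(gdb) quit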
(Using an active component which connects to a debugger server)
With PyCharm: PyCharm provides a Python Debug Server which can be used with PySpark jobs.
First of all you should add a run configuration for the remote debugger:
Configure host and port according to your own setup (make sure that the port can be reached from a remote machine).
Start debug server:
shift+F9
You should see the debugger console:
Make sure that pydevd is accessible on the worker nodes, either by installing it or by distributing the egg file (see the sketch after this procedure).
pydevd uses an active component which has to be included in your code:
import pydevd
pydevd.settrace(<host name>, port=<port number>)
The tricky part is to find the right place to include it, and unless you are debugging batch operations (like functions passed to mapPartitions; see the sketch after this procedure) it may require patching the PySpark source itself, for example pyspark.daemon.worker or RDD methods like RDD.mapPartitions. Let's say we are interested in debugging worker behavior. A possible patch can look like this:
diff --git a/python/pyspark/daemon.py b/python/pyspark/daemon.py
index 7f06d4288c..6cff353795 100644
--- a/python/pyspark/daemon.py
+++ b/python/pyspark/daemon.py
@@ -44,6 +44,9 @@ def worker(sock):
"""
Called by a worker process after the fork().
"""
+ import pydevd
+ pydevd.settrace('foobar', port=9999, stdoutToServer=True, stderrToServer=True)
+
signal.signal(SIGHUP, SIG_DFL)
signal.signal(SIGCHLD, SIG_DFL)
signal.signal(SIGTERM, SIG_DFL)
If you decide to patch the Spark source, be sure to use the patched source, not the packaged version located in $SPARK_HOME/python/lib.
Execute your PySpark code, go back to the debugger console, and have fun.
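If patching PySpark itself is not an option, a hedged alternative is to call pydevd.settrace from inside a function you pass to mapPartitions and to ship pydevd to the workers yourself. The host, port and path below only mirror the illustrative values used above; adjust them to your own setup:
def process_partition(records):
    # Runs on the worker and connects back to the PyCharm debug server
    # ('foobar' and 9999 mirror the values used in the patch above).
    import pydevd
    pydevd.settrace('foobar', port=9999, stdoutToServer=True, stderrToServer=True)
    for record in records:
        yield record.upper()

# Make pydevd importable on the workers, for example by shipping the egg
# (the path below is only a placeholder):
sc.addPyFile("/path/to/pydevd.egg")
print(sc.parallelize(["a", "b", "c"], 1).mapPartitions(process_partition).collect())
Whether settrace can actually reach the debug server from a worker depends on your network setup, which is why the host / port configuration above matters.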
There are a number of other tools, including python-manhole or pyrasite, which can be used, with some effort, to work with PySpark.
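For example, python-manhole can be wired in much like pydevd above. This is only a rough sketch and assumes the manhole package is importable on the workers:
def process_partition(records):
    # Opens manhole's backdoor UNIX socket (/tmp/manhole-<pid>) inside the
    # worker process; you can then connect to it from a shell on that machine.
    import manhole
    manhole.install()
    for record in records:
        yield record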
Note:
Of course, you can use "remote" (active) methods in local mode and, to some extent, "local" methods in distributed mode (you can connect to the worker node and follow the same steps as in local mode).