
Easiest way to install Python dependencies on Spark executor nodes?

I understand that you can send individual files as dependencies with Python Spark programs. But what about full-fledged libraries (e.g. numpy)?
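
For example, a single module can be shipped alongside a job like this (file names here are just placeholders):

spark-submit --py-files my_helper.py my_job.py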

Does Spark have a way to use a provided package manager (e.g. pip) to install library dependencies? Or does this have to be done manually before Spark programs are executed?

If the answer is manual, then what are the "best practice" approaches for synchronizing libraries (installation path, version, etc.) over a large number of distributed nodes?

Asked Apr 07 '15 by trianta2


1 Answer

Having actually tried it, I think the link I posted as a comment doesn't do exactly what you want with dependencies. What you are quite reasonably asking for is a way to have Spark play nicely with setuptools and pip for installing dependencies. It blows my mind that this isn't better supported in Spark. The third-party dependency problem is largely solved in general-purpose Python, but under Spark the assumption seems to be that you'll fall back to manual dependency management.

I have been using an imperfect but functional pipeline based on virtualenv. The basic idea is

  1. Create a virtualenv purely for your Spark nodes.
  2. Each time you run a Spark job, run a fresh pip install of all your own in-house Python libraries. If you have set these up with setuptools, this will install their dependencies too.
  3. Zip up the site-packages dir of the virtualenv. This will include your library and its dependencies, which the worker nodes will need, but not the standard Python library, which they already have.
  4. Pass the single .zip file, containing your libraries and their dependencies, as an argument to --py-files.

Of course you would want to code up some helper scripts to manage this process. Here is a helper script adapted from one I have been using, which could doubtless be improved a lot:

#!/usr/bin/env bash
#
# Helper script to fulfil Spark's Python packaging requirements.
# Installs everything in a designated virtualenv, then zips up the virtualenv
# for use as the value supplied to the --py-files argument of `pyspark` or `spark-submit`.
#
# First argument should be the top-level virtualenv.
# Second argument is the zipfile which will be created, and
#   which you can subsequently supply as the --py-files argument to
#   spark-submit.
# Subsequent arguments are all the private packages you wish to install.
# If these are set up with setuptools, their dependencies will be installed.

VENV=$1; shift
ZIPFILE=$1; shift
PACKAGES=$*

. $VENV/bin/activate
for pkg in $PACKAGES; do
  pip install --upgrade $pkg
done

# Absolute path; use a random number to avoid clashes with other processes.
TMPZIP="$TMPDIR/$RANDOM.zip"
( cd "$VENV/lib/python2.7/site-packages" && zip -q -r $TMPZIP . )
mv $TMPZIP $ZIPFILE

I have a collection of other simple wrapper scripts that I run to submit my Spark jobs. I simply call this script first as part of that process and make sure that the second argument (the name of a zip file) is then passed as the --py-files argument when I run spark-submit (as documented in the comments). I always run these scripts, so I never end up accidentally running old code. Compared to the Spark overhead, the packaging overhead is minimal for my small-scale project.
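
For illustration, a minimal wrapper might look something like this; the script and package names are placeholders rather than my actual ones, and it assumes the helper above is saved as make_spark_zip.sh:

#!/usr/bin/env bash
# Hypothetical wrapper: rebuild the dependency zip, then submit the job.
VENV=./spark-venv      # virtualenv created for the Spark nodes (step 1 above)
DEPS_ZIP=./deps.zip    # zip file to hand to --py-files

# Reinstall the in-house packages into the virtualenv and zip up site-packages.
./make_spark_zip.sh "$VENV" "$DEPS_ZIP" ./my_private_pkg ./my_other_pkg

# Submit the job with the freshly built zip on the executors' Python path.
spark-submit --py-files "$DEPS_ZIP" my_job.py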

There are loads of improvements that could be made, e.g. being smart about when to create a new zip file, or splitting it into two zip files, one containing often-changing private packages and one containing rarely changing dependencies that don't need to be rebuilt as often. You could be smarter about checking for file changes before rebuilding the zip. Checking the validity of the arguments would also be a good idea. For now, however, this suffices for my purposes.
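
For example, a crude staleness check along these lines could skip the rebuild when nothing has changed (paths and script names are placeholders, matching the sketch above):

# Rebuild deps.zip only if it is missing or any package source file is newer than it.
DEPS_ZIP=./deps.zip
PKG_DIRS="./my_private_pkg ./my_other_pkg"

if [ ! -f "$DEPS_ZIP" ] || [ -n "$(find $PKG_DIRS -newer "$DEPS_ZIP" -print -quit)" ]; then
  ./make_spark_zip.sh ./spark-venv "$DEPS_ZIP" $PKG_DIRS
fi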

The solution I have come up with is not designed for large-scale dependencies like NumPy specifically (although it may work for them). Also, it won't work if you are building C-based extensions and your driver node has a different architecture from your cluster nodes.

I have seen recommendations elsewhere to just run a Python distribution like Anaconda on all your nodes, since it already includes NumPy (and many other packages), and that may well be the better way to get NumPy and other C-based extensions working. Regardless, we can't always expect Anaconda to have the PyPI package we want in the right version, and you may not have enough control over your Spark environment to put Anaconda on it, so I think this virtualenv-based approach is still helpful.

Answered Oct 16 '22 by Andy MacKinlay