Apache Spark application deployment best practices

I have a couple of use cases for Apache Spark applications/scripts, generally of the following form:

1. General ETL use case - more specifically, a transformation of a Cassandra column family containing many events (think event sourcing) into various aggregated column families.

2. Streaming use case - real-time analysis of the events as they arrive in the system.

For (1), I'll need to kick off the Spark application periodically.

For (2), I'll just kick off the long-running Spark Streaming process at boot time and let it run.

(Note: I'm using Spark Standalone as the cluster manager, so no YARN or Mesos.)

I'm trying to figure out the most common / best practice deployment strategies for Spark applications.

So far the options I can see are:

  1. Deploying my program as a jar and running the various tasks with spark-submit - which seems to be the approach recommended in the Spark docs. Some thoughts about this strategy:

    • how do you start/stop tasks - just using simple bash scripts? (a rough spark-submit wrapper sketch follows this list)
    • how is scheduling managed? - simply use cron?
    • any resilience? (e.g. who schedules the jobs to run if the driver server dies?)
  2. Creating a separate webapp as the driver program.

    • creates a Spark context programmatically to talk to the Spark cluster (a rough sketch of this also follows the list)
    • allows users to kick off tasks through the HTTP interface
    • uses Quartz (for example) to manage scheduling
    • could run as a cluster with ZooKeeper leader election for resilience
  3. Spark job server (https://github.com/ooyala/spark-jobserver)

    • I don't think there's much benefit over (2) for me, as I don't (yet) have many teams and projects talking to Spark, and I would still need some app to talk to the job server anyway
    • no scheduling built in as far as I can see
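
To make option (1) concrete, here is the kind of thing I mean by "simple bash scripts plus cron" - the jar path, main class, log location, and master URL below are just placeholders:

    #!/usr/bin/env bash
    # run-etl.sh -- hypothetical wrapper around spark-submit for the periodic ETL job.
    # All paths, the main class and the master URL are placeholders.
    /opt/spark/bin/spark-submit \
      --class com.example.etl.EventAggregator \
      --master spark://spark-master:7077 \
      /opt/jobs/etl-assembly.jar "$@" \
      >> /var/log/spark-etl.log 2>&1

    # crontab entry to run the ETL hourly:
    # 0 * * * * /opt/jobs/run-etl.sh

This also illustrates my resilience question: cron plus a wrapper script works, but nothing reschedules the job if the box running cron goes away.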
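
For option (2), the rough shape of the driver I'm picturing (Scala, with a made-up app name, jar path, and master URL) would be something like:

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical driver embedded in a web app: it builds a SparkContext against the
    // standalone master directly instead of going through spark-submit.
    object EmbeddedDriver {
      def runAggregation(): Unit = {
        val conf = new SparkConf()
          .setAppName("etl-webapp-driver")              // made-up app name
          .setMaster("spark://spark-master:7077")       // standalone master URL
          .setJars(Seq("/opt/jobs/etl-assembly.jar"))   // ship the job code to the executors
        val sc = new SparkContext(conf)
        try {
          // ... run the aggregation here, triggered from an HTTP handler or a Quartz job
        } finally {
          sc.stop()
        }
      }
    }

The web app (or Quartz) would then call runAggregation() on whatever schedule or user request applies.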

I'd like to understand the general consensus on a simple but robust deployment strategy - I haven't been able to determine one by trawling the web so far.

Thanks very much!

asked May 23 '15 by lucas1000001

1 Answer

Even though you are not using Mesos for Spark, you could have a look at:

    • Chronos, which offers a distributed and fault-tolerant cron
    • Marathon, a Mesos framework for long-running applications

Note that this doesn't mean you have to move your Spark deployment to Mesos; for example, you could just use Chronos to trigger spark-submit, along the lines of the sketch below.
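
As an illustration, a Chronos job along these lines (the job name, schedule, paths and owner are purely made up) could fire spark-submit every hour; you would POST it to Chronos's ISO 8601 scheduler endpoint (the exact path depends on your Chronos version):

    {
      "name": "spark-etl-hourly",
      "command": "/opt/spark/bin/spark-submit --class com.example.etl.EventAggregator --master spark://spark-master:7077 /opt/jobs/etl-assembly.jar",
      "schedule": "R/2015-05-24T00:00:00Z/PT1H",
      "epsilon": "PT15M",
      "owner": "ops@example.com"
    }

Since Chronos can run with multiple instances behind ZooKeeper, the "who schedules the jobs if the driver server dies" concern no longer sits on a single box.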

I hope I understood your problem correctly and this helps you a bit!

answered Oct 13 '22 by js84