
Run Spark as a Java web application

I have used Spark ML and was able to get reasonable accuracy in prediction for my business problem.

The data is not huge, and I was able to transform the input (basically a CSV file) using Stanford NLP and run Naive Bayes for prediction on my local machine.

I want to run this prediction service like a simple Java main program, or along with a simple MVC web application.

Currently I run my prediction using the spark-submit command. Instead, can I create a Spark context and DataFrames from my servlet/controller class?

I could not find any documentation on such scenarios.

Kindly advise regarding the feasibility of the above.

asked Oct 13 '16 by lives

People also ask

Can we use Spark with Java?

Spark jobs can be written in Java, Scala, Python, R, and SQL. It provides out-of-the-box libraries for machine learning, graph processing, streaming, and SQL-like data processing.

Can I use Java on Spark shell?

Basics. Spark's shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively. It is available in either Scala (which runs on the Java VM and is thus a good way to use existing Java libraries) or Python.

Can Spark run on Java 11?

Spark runs on Java 8/11, Scala 2.12, Python 2.7+/3.4+ and R 3.1+.

Does Spark use Jetty?

Standalone Spark runs on an embedded Jetty web server.


1 Answer

Spark has a REST API for submitting jobs by invoking the Spark master's hostname.

Submit an Application:

curl -X POST http://spark-cluster-ip:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
  "action" : "CreateSubmissionRequest",
  "appArgs" : [ "myAppArgument1" ],
  "appResource" : "file:/myfilepath/spark-job-1.0.jar",
  "clientSparkVersion" : "1.5.0",
  "environmentVariables" : {
    "SPARK_ENV_LOADED" : "1"
  },
  "mainClass" : "com.mycompany.MyJob",
  "sparkProperties" : {
    "spark.jars" : "file:/myfilepath/spark-job-1.0.jar",
    "spark.driver.supervise" : "false",
    "spark.app.name" : "MyJob",
    "spark.eventLog.enabled": "true",
    "spark.submit.deployMode" : "cluster",
    "spark.master" : "spark://spark-cluster-ip:6066"
  }
}'

Submission Response:

{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20151008145126-0000",
  "serverSparkVersion" : "1.5.0",
  "submissionId" : "driver-20151008145126-0000",
  "success" : true
}
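From a servlet or controller, the same request can be issued with the plain JDK HTTP client, so the web app never has to shell out to spark-submit. The sketch below is a minimal example using only `java.net`; the jar path, main class, and master host are placeholders you would replace with your own values, and the Spark version string should match your cluster.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SparkRestSubmitter {

    // Builds the CreateSubmissionRequest JSON shown above.
    // jarPath, mainClass, and master are caller-supplied placeholders.
    static String buildSubmissionJson(String jarPath, String mainClass, String master) {
        return "{"
            + "\"action\":\"CreateSubmissionRequest\","
            + "\"appArgs\":[],"
            + "\"appResource\":\"" + jarPath + "\","
            + "\"clientSparkVersion\":\"1.5.0\","
            + "\"environmentVariables\":{\"SPARK_ENV_LOADED\":\"1\"},"
            + "\"mainClass\":\"" + mainClass + "\","
            + "\"sparkProperties\":{"
            + "\"spark.jars\":\"" + jarPath + "\","
            + "\"spark.app.name\":\"MyJob\","
            + "\"spark.submit.deployMode\":\"cluster\","
            + "\"spark.master\":\"" + master + "\""
            + "}}";
    }

    // POSTs the request to the master's REST endpoint (port 6066 by default)
    // and returns the raw JSON response body.
    static String submit(String masterHost, String json) throws Exception {
        URL url = new URL("http://" + masterHost + ":6066/v1/submissions/create");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json;charset=UTF-8");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(json.getBytes(StandardCharsets.UTF_8));
        }
        try (java.util.Scanner s = new java.util.Scanner(conn.getInputStream(), "UTF-8")
                .useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }
}
```

The returned body is the CreateSubmissionResponse JSON shown above, from which you can pull the `submissionId` for status polling.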

Get the status of a submitted application:

curl http://spark-cluster-ip:6066/v1/submissions/status/driver-20151008145126-0000

Status Response:

{
  "action" : "SubmissionStatusResponse",
  "driverState" : "FINISHED",
  "serverSparkVersion" : "1.5.0",
  "submissionId" : "driver-20151008145126-0000",
  "success" : true,
  "workerHostPort" : "192.168.3.153:46894",
  "workerId" : "worker-20151007093409-192.168.3.153-46894"
}

Now, the Spark application you submit should perform all the operations and save its output to a data source; your web app can then access that data via the Thrift server, since you don't have much data to transfer (you can consider Sqoop if you want to move data between your MVC app's database and the Hadoop cluster).
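Since the Spark Thrift server speaks the HiveServer2 protocol, the web app can read the saved predictions over plain JDBC with the Hive driver. A minimal sketch, assuming a Thrift server on the default port 10000 and a hypothetical `predictions` table:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThriftServerClient {

    // Builds the HiveServer2-style JDBC URL the Spark Thrift server accepts.
    static String jdbcUrl(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) throws Exception {
        // Requires the Hive JDBC driver on the classpath; host, database,
        // table, and column names below are placeholders.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                 jdbcUrl("spark-cluster-ip", 10000, "default"), "", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, prediction FROM predictions")) {
            while (rs.next()) {
                System.out.println(rs.getLong("id") + " => " + rs.getDouble("prediction"));
            }
        }
    }
}
```

Because this is standard JDBC, it drops into any servlet or Spring controller the same way a MySQL query would.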

credits: link1, link2

Edit (as per the question in the comments): build the Spark application jar with the necessary dependencies and run the job in local mode. Write the jar so it reads the CSV, applies MLlib, and then stores the prediction output in some data source your web app can access.
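For local mode, Spark can be created directly inside your own JVM (a main class or the web app's process) with no cluster or spark-submit involved. A sketch using the Spark 2.x `SparkSession` API; file paths and the commented-out model step are placeholders for your own pipeline:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class LocalPredictionJob {

    // Master URL for an in-process Spark; "local[*]" uses all available cores.
    static String localMaster(int cores) {
        return cores <= 0 ? "local[*]" : "local[" + cores + "]";
    }

    public static void main(String[] args) {
        // Runs Spark embedded in this JVM -- no cluster, no spark-submit.
        SparkSession spark = SparkSession.builder()
                .appName("csv-prediction")
                .master(localMaster(0))
                .getOrCreate();

        // Read the input CSV as a DataFrame.
        Dataset<Row> input = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("file:/myfilepath/input.csv");

        // ... feature transformation + loading and applying your
        //     trained MLlib model (e.g. NaiveBayesModel) goes here ...

        // Persist the output where the web app can pick it up.
        input.write().mode("overwrite").parquet("file:/myfilepath/predictions");
        spark.stop();
    }
}
```

Note that holding a `SparkSession` inside a servlet container works for small workloads, but only one Spark context can exist per JVM, so it is usually shared as a singleton across requests.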

answered Sep 18 '22 by mrsrinivas