I have used Spark ML and was able to get reasonable prediction accuracy for my business problem.
The data is not huge, and I was able to transform the input (basically a CSV file) using Stanford NLP and run Naive Bayes for prediction on my local machine.
I want to run this prediction service as a simple Java main program, or as part of a simple MVC web application.
Currently I run my prediction using the spark-submit command. Instead, can I create a SparkContext and DataFrames from my servlet/controller class?
I could not find any documentation on such scenarios.
Kindly advise on the feasibility of the above.
Spark jobs can be written in Java, Scala, Python, R, and SQL. Spark provides out-of-the-box libraries for machine learning, graph processing, streaming, and SQL-like data processing.
Spark's shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively. It is available in either Scala (which runs on the Java VM and is thus a good way to use existing Java libraries) or Python.
Spark runs on Java 8/11, Scala 2.12, Python 2.7+/3.4+ and R 3.1+.
Standalone Spark runs on an embedded Jetty web server.
Spark has a REST API for submitting jobs, which you invoke against the Spark master's hostname (port 6066 by default, as used below).
Submit an Application:
curl -X POST http://spark-cluster-ip:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
  "action" : "CreateSubmissionRequest",
  "appArgs" : [ "myAppArgument1" ],
  "appResource" : "file:/myfilepath/spark-job-1.0.jar",
  "clientSparkVersion" : "1.5.0",
  "environmentVariables" : {
    "SPARK_ENV_LOADED" : "1"
  },
  "mainClass" : "com.mycompany.MyJob",
  "sparkProperties" : {
    "spark.jars" : "file:/myfilepath/spark-job-1.0.jar",
    "spark.driver.supervise" : "false",
    "spark.app.name" : "MyJob",
    "spark.eventLog.enabled" : "true",
    "spark.submit.deployMode" : "cluster",
    "spark.master" : "spark://spark-cluster-ip:6066"
  }
}'
Submission Response:
{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20151008145126-0000",
  "serverSparkVersion" : "1.5.0",
  "submissionId" : "driver-20151008145126-0000",
  "success" : true
}
Get the status of a submitted application:
curl http://spark-cluster-ip:6066/v1/submissions/status/driver-20151008145126-0000
Status Response:
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "FINISHED",
  "serverSparkVersion" : "1.5.0",
  "submissionId" : "driver-20151008145126-0000",
  "success" : true,
  "workerHostPort" : "192.168.3.153:46894",
  "workerId" : "worker-20151007093409-192.168.3.153-46894"
}
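If you would rather trigger the submission from your servlet/controller than shell out to curl, you can call the same REST endpoint from Java. Below is a minimal sketch using java.net.http.HttpClient (Java 11+); the host, jar path, and main class are the same placeholder values as in the curl example above, so substitute your own.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SparkRestSubmitter {

    public static void main(String[] args) throws Exception {
        // Same placeholder values as in the curl example above.
        String body = "{"
                + "\"action\":\"CreateSubmissionRequest\","
                + "\"appArgs\":[\"myAppArgument1\"],"
                + "\"appResource\":\"file:/myfilepath/spark-job-1.0.jar\","
                + "\"clientSparkVersion\":\"1.5.0\","
                + "\"environmentVariables\":{\"SPARK_ENV_LOADED\":\"1\"},"
                + "\"mainClass\":\"com.mycompany.MyJob\","
                + "\"sparkProperties\":{"
                + "\"spark.jars\":\"file:/myfilepath/spark-job-1.0.jar\","
                + "\"spark.app.name\":\"MyJob\","
                + "\"spark.submit.deployMode\":\"cluster\","
                + "\"spark.master\":\"spark://spark-cluster-ip:6066\"}"
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://spark-cluster-ip:6066/v1/submissions/create"))
                .header("Content-Type", "application/json;charset=UTF-8")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON response contains the submissionId, which you can poll
        // on /v1/submissions/status/<submissionId> as shown above.
        System.out.println(response.body());
    }
}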
Now, the Spark application that you submit should perform all of the operations and save its output to some data source, and the web app can access that data via the Thrift server, since you don't have much data to transfer (you can think of Sqoop if you want to transfer data between your MVC app's database and the Hadoop cluster).
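For illustration, here is a rough sketch of how the web app could read the saved predictions through the Thrift server over JDBC. It assumes the Thrift server is listening on the default port 10000, the hive-jdbc driver is on the classpath, and the predictions were saved to a hypothetical table named predictions; adjust the host, credentials, and table/column names for your setup.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PredictionLookup {

    public static void main(String[] args) throws Exception {
        // Hypothetical Thrift server endpoint and table name.
        String url = "jdbc:hive2://spark-cluster-ip:10000/default";

        try (Connection conn = DriverManager.getConnection(url, "user", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT id, prediction FROM predictions LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getString("id") + " -> " + rs.getDouble("prediction"));
            }
        }
    }
}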
credits: link1, link2
Edit (as per the question in the comments): build a Spark application jar with the necessary dependencies and run the job in local mode. Write the jar so that it reads the CSV, uses MLlib for the prediction, and then stores the output in some data source so the web app can access it, as in the sketch below.
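A minimal sketch of such a local-mode job, assuming you have already fitted and saved an ML pipeline (your feature transformers plus Naive Bayes) and that the paths below are placeholders:

import org.apache.spark.ml.PipelineModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class LocalPredictionJob {

    public static void main(String[] args) {
        // local[*] runs Spark inside this JVM, so no cluster or spark-submit is needed.
        SparkSession spark = SparkSession.builder()
                .appName("LocalPrediction")
                .master("local[*]")
                .getOrCreate();

        // Placeholder paths: the input CSV and the previously saved pipeline model.
        Dataset<Row> input = spark.read()
                .option("header", "true")
                .csv("/path/to/input.csv");

        PipelineModel model = PipelineModel.load("/path/to/naive-bayes-pipeline");

        Dataset<Row> predictions = model.transform(input);

        // Persist the predictions where the web app (or Thrift server) can pick them up.
        predictions.write()
                .mode("overwrite")
                .parquet("/path/to/predictions");

        spark.stop();
    }
}

The same code can be called from a plain Java main program or wired into a service class in your MVC application, since local mode does not require a separate cluster.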