
Spark submit does automatically upload the jar to cluster?

Tags:

apache-spark

I'm trying to submit a Spark app from my local machine's terminal to my cluster, using --master yarn-cluster. I need the driver program to run on my cluster too, not on the machine from which I submit the application, i.e. my local machine.

When I provide the path to the application jar, which is on my local machine, will spark-submit automatically upload it to my cluster?

I'm using

    bin/spark-submit \
        --class com.my.application.XApp \
        --master yarn-cluster \
        --executor-memory 100m \
        --num-executors 50 \
        /Users/nish1013/proj1/target/x-service-1.0.0-201512141101-assembly.jar \
        1000

and I'm getting the error:

Diagnostics: java.io.FileNotFoundException: File file:/Users/nish1013/proj1/target/x-service-1.0.0-201512141101- does not exist

In the documentation, http://spark.apache.org/docs/latest/submitting-applications.html#launching-applications-with-spark-submit, it says:

Advanced Dependency Management: When using spark-submit, the application jar along with any jars included with the --jars option will be automatically transferred to the cluster.

But it seems like it does not!

nish1013 asked Dec 21 '15 08:12

1 Answer

I see you are quoting the spark-submit page from the Spark docs, but I would spend a lot more time on the Running Spark on YARN page. Bottom line, look at:

There are two deploy modes that can be used to launch Spark applications on YARN. In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.

Further, you note: "I need to run the driver program on my Cluster too, not on the machine I do submit the application i.e my local machine".

So I agree with you: you are right to run --master yarn-cluster instead of --master yarn-client.
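Concretely, the difference at submit time is just the master setting; a minimal sketch, reusing the class and jar path from your question:

    # yarn-cluster: the driver runs inside a YARN application master on the cluster
    bin/spark-submit --class com.my.application.XApp --master yarn-cluster \
        /Users/nish1013/proj1/target/x-service-1.0.0-201512141101-assembly.jar 1000

    # yarn-client: the driver runs in your local submitting process (not what you want here)
    bin/spark-submit --class com.my.application.XApp --master yarn-client \
        /Users/nish1013/proj1/target/x-service-1.0.0-201512141101-assembly.jar 1000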

(And one comment notes what might just be a syntax error, where you dropped "assembly.jar" from the end of the jar path, but I think the rest of this applies as well...)

Some of the basic assumptions about non-YARN implementations change a lot when YARN is introduced, mostly related to classpaths and the need to push jars to the workers.

From an email on the Apache Spark User list:

YARN cluster mode. Spark submit does upload your jars to the cluster. In particular, it puts the jars in HDFS so your driver can just read from there. As in other deployments, the executors pull the jars from the driver.
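If you want to sanity-check that the upload actually happened, the jar normally lands under a .sparkStaging directory in your HDFS home once the application is accepted. The exact path depends on your user name and the application id, so treat this as an illustration only:

    # Illustration only: the exact path depends on your HDFS user name and on the
    # YARN application id that spark-submit prints once the app is accepted
    hdfs dfs -ls /user/$USER/.sparkStaging/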

So finally, from the Apache Spark YARN doc:

Ensure that HADOOP_CONF_DIR or YARN_CONF_DIR points to the directory which contains the (client side) configuration files for the Hadoop cluster. These configs are used to write to HDFS and connect to the YARN ResourceManager.
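In practice that usually just means exporting the variable in the shell you submit from; a rough sketch (the path below is only an example, point it at wherever your cluster's client-side core-site.xml, hdfs-site.xml and yarn-site.xml actually live):

    # Example path only: use the directory that holds your cluster's client configs
    export HADOOP_CONF_DIR=/etc/hadoop/conf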


NOTE: I only see you adding a single JAR. If you need to add other JARs, there is a special note about doing that with YARN:

In yarn-cluster mode, the driver runs on a different machine than the client, so SparkContext.addJar won’t work out of the box with files that are local to the client. To make files on the client available to SparkContext.addJar, include them with the --jars option in the launch command.
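So if you ever need more than the single assembly jar, the command would look roughly like this (the extra jar names are made up purely for illustration):

    # extra-lib.jar and other-dep.jar are hypothetical names, just to show the
    # comma-separated --jars syntax; they get shipped to the cluster with the app jar
    bin/spark-submit \
        --class com.my.application.XApp \
        --master yarn-cluster \
        --jars /Users/nish1013/libs/extra-lib.jar,/Users/nish1013/libs/other-dep.jar \
        /Users/nish1013/proj1/target/x-service-1.0.0-201512141101-assembly.jar \
        1000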

That page in the link has some examples.


And of course, make sure you downloaded or built the YARN-enabled version of Spark.


Background: in a standalone cluster deployment, using spark-submit with the option --deploy-mode cluster, yes, you do need to make sure every worker node has access to all the dependencies; Spark will not push them to the cluster. This is because in "standalone cluster" mode, with Spark as the job manager, you don't know in advance which node the driver will run on. But that doesn't apply to your case.

That said, depending on the size of the jars you are uploading, I would still explicitly put the jars on each node, or make them "globally available" via HDFS, for another reason from the docs:

The Advanced Dependency Management section seems to present the best of both worlds, but it is also a great reason for manually pushing your jars out to all nodes:

local: - a URI starting with local:/ is expected to exist as a local file on each worker node. This means that no network IO will be incurred, and works well for large files/JARs that are pushed to each worker, or shared via NFS, GlusterFS, etc.

But I assume that local:/... would change to hdfs://... in the HDFS case; I'm not sure on that one.
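Roughly what I mean, as a sketch (the paths are invented, and which URI scheme you use depends on where you decide to stage the jar):

    # jar already copied to the same path on every worker node -- no network IO
    bin/spark-submit --class com.my.application.XApp --master yarn-cluster \
        local:/opt/jars/x-service-1.0.0-201512141101-assembly.jar 1000

    # jar staged once in HDFS and read from there
    bin/spark-submit --class com.my.application.XApp --master yarn-cluster \
        hdfs:///user/nish1013/jars/x-service-1.0.0-201512141101-assembly.jar 1000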

JimLohse answered Oct 11 '22 17:10