
Spark Driver memory and Application Master memory

Am I understanding the documentation for client mode correctly?

  1. client mode is opposed to cluster mode where the driver runs within the application master?
  2. In client mode the driver and application master are separate processes and therefore spark.driver.memory + spark.yarn.am.memory must be less than the machine's memory?
  3. In client mode is the driver memory is not included in the application master memory setting?
asked May 18 '18 by user782220


1 Answer

client mode is opposed to cluster mode where the driver runs within the application master?

Yes. When a Spark application is deployed on YARN:

  • In client mode, the driver runs on the machine where the application was submitted, and that machine must remain reachable on the network until the application completes.
  • In cluster mode, the driver runs inside the application master (one per Spark application), so the submitting machine need not stay on the network after submission.
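A minimal sketch of how the two deploy modes are selected at submission time. The app name (app.py) and memory values are hypothetical placeholders; the flags themselves are standard spark-submit options:

```python
def submit_command(deploy_mode, driver_memory="2g", am_memory="1g"):
    """Build an illustrative spark-submit command line for YARN.

    deploy_mode is "client" or "cluster". In client mode the AM is a
    separate process from the driver, so it gets its own memory setting
    (spark.yarn.am.memory); in cluster mode the driver *is* inside the
    AM container, so only spark.driver.memory applies.
    """
    cmd = [
        "spark-submit",
        "--master", "yarn",
        "--deploy-mode", deploy_mode,
        "--conf", f"spark.driver.memory={driver_memory}",
    ]
    if deploy_mode == "client":
        cmd += ["--conf", f"spark.yarn.am.memory={am_memory}"]
    cmd.append("app.py")  # hypothetical application
    return cmd

print(" ".join(submit_command("client")))
print(" ".join(submit_command("cluster")))
```

Note that spark.yarn.am.memory is only appended in client mode; in cluster mode it is ignored by Spark in favor of spark.driver.memory.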

Client mode (diagram)

Cluster mode (diagram)

If a Spark application is submitted in cluster mode to Spark's own standalone resource manager, the driver process runs on one of the worker nodes.

References for images and content:

  • StackOverflow - Spark on yarn concept understanding
  • Cloudera Blog - Apache Spark Resource Management and YARN App Models

In client mode the driver and application master are separate processes and therefore spark.driver.memory + spark.yarn.am.memory must be less than the machine's memory?

No. In client mode the driver and the AM are separate processes on different machines, so their memory settings need not be combined on one machine. However, spark.yarn.am.memory plus some overhead must fit within the YARN container memory limit (yarn.nodemanager.resource.memory-mb); if the container exceeds its limit, YARN will kill it.
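As a back-of-the-envelope check, the container YARN requests for the AM is the AM memory plus an overhead, which by default is 10% of the AM memory with a 384 MiB floor (spark.yarn.am.memoryOverhead). A small Python sketch of that arithmetic:

```python
def am_container_request_mb(am_memory_mb, overhead_mb=None):
    """Approximate size of the YARN container requested for the AM.

    Default overhead follows spark.yarn.am.memoryOverhead:
    10% of the AM memory, with a minimum of 384 MiB.
    """
    if overhead_mb is None:
        overhead_mb = max(384, int(am_memory_mb * 0.10))
    return am_memory_mb + overhead_mb

# With the 512m default AM memory, the request is 512 + 384 = 896 MiB.
print(am_container_request_mb(512))   # 896
```

This 896 MiB total (not the bare 512m) is what must fit under yarn.nodemanager.resource.memory-mb.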

In client mode is the driver memory is not included in the application master memory setting?

Correct. In client mode spark.driver.memory must be less than the memory available on the machine from which the Spark application is launched.

In cluster mode, however, use spark.driver.memory instead of spark.yarn.am.memory.

spark.yarn.am.memory : 512m (default)

Amount of memory to use for the YARN Application Master in client mode, in the same format as JVM memory strings (e.g. 512m, 2g). In cluster mode, use spark.driver.memory instead. Use lower-case suffixes, e.g. k, m, g, t, and p, for kibi-, mebi-, gibi-, tebi-, and pebibytes, respectively.
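The memory-string format described above can be parsed mechanically. A small Python sketch, with the suffix table taken from the quoted docs (binary units), expressed in MiB:

```python
def parse_mem_to_mib(s):
    """Parse a JVM-style memory string (e.g. '512m', '2g') to MiB.

    Suffixes per the Spark docs: k, m, g, t, p for kibi-, mebi-,
    gibi-, tebi-, and pebibytes.
    """
    units = {"k": 1 / 1024, "m": 1, "g": 1024, "t": 1024 ** 2, "p": 1024 ** 3}
    suffix = s[-1].lower()
    if suffix not in units:
        raise ValueError(f"unknown memory suffix in {s!r}")
    return int(float(s[:-1]) * units[suffix])

print(parse_mem_to_mib("512m"))  # 512
print(parse_mem_to_mib("2g"))    # 2048
```

Comparing settings in a common unit like this makes it easy to verify, for example, that spark.yarn.am.memory plus overhead stays under the NodeManager limit.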

See the Spark "Running on YARN" configuration documentation for more about these properties.

answered Sep 18 '22 by mrsrinivas