 

Spark Metrics: how to access executor and worker data?

Note: I am using Spark on YARN

I have been trying out the metrics system implemented in Spark. I enabled the ConsoleSink and the CsvSink, and enabled the JvmSource for all four instances (driver, master, worker, executor). However, I only get driver output; there is no worker/executor/master data in the console or in the CSV target directory.

After having read this question, I wonder whether I have to ship something to the executors when submitting a job.

My submit command: ./bin/spark-submit --class org.apache.spark.examples.SparkPi lib/spark-examples-1.5.0-hadoop2.6.0.jar 10

Below is my metrics.properties file:

# Enable JmxSink for all instances by class name
*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink

# Enable ConsoleSink for all instances by class name
*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink

# Polling period for ConsoleSink
*.sink.console.period=10

*.sink.console.unit=seconds

#######################################
# worker instance overlap polling period
worker.sink.console.period=5

worker.sink.console.unit=seconds
#######################################

# Master instance overlap polling period
master.sink.console.period=15

master.sink.console.unit=seconds

# Enable CsvSink for all instances
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
#driver.sink.csv.class=org.apache.spark.metrics.sink.CsvSink

# Polling period for CsvSink
*.sink.csv.period=10

*.sink.csv.unit=seconds

# Polling directory for CsvSink
*.sink.csv.directory=/opt/spark-1.5.0-bin-hadoop2.6/csvSink/

# Worker instance overlap polling period
worker.sink.csv.period=10

worker.sink.csv.unit=seconds

# Enable Slf4jSink for all instances by class name
#*.sink.slf4j.class=org.apache.spark.metrics.sink.Slf4jSink

# Polling period for Slf4JSink
#*.sink.slf4j.period=1

#*.sink.slf4j.unit=minutes


# Enable jvm source for instance master, worker, driver and executor
master.source.jvm.class=org.apache.spark.metrics.source.JvmSource

worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource

driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource

executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
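
Side note: as far as I can tell, the CsvSink (backed by the Codahale CsvReporter) does not create its target directory by itself, so the directory from the config above has to exist locally on every node where a reporting instance runs, e.g.:

# create the CsvSink target directory (repeat on each node that can host a driver/executor)
mkdir -p /opt/spark-1.5.0-bin-hadoop2.6/csvSink/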

And here is a listing of the CSV files created by Spark. I would like to access the same data for the Spark executors (which are also JVMs).

app-20160812135008-0013.driver.BlockManager.disk.diskSpaceUsed_MB.csv
app-20160812135008-0013.driver.BlockManager.memory.maxMem_MB.csv
app-20160812135008-0013.driver.BlockManager.memory.memUsed_MB.csv
app-20160812135008-0013.driver.BlockManager.memory.remainingMem_MB.csv
app-20160812135008-0013.driver.jvm.heap.committed.csv
app-20160812135008-0013.driver.jvm.heap.init.csv
app-20160812135008-0013.driver.jvm.heap.max.csv
app-20160812135008-0013.driver.jvm.heap.usage.csv
app-20160812135008-0013.driver.jvm.heap.used.csv
app-20160812135008-0013.driver.jvm.non-heap.committed.csv
app-20160812135008-0013.driver.jvm.non-heap.init.csv
app-20160812135008-0013.driver.jvm.non-heap.max.csv
app-20160812135008-0013.driver.jvm.non-heap.usage.csv
app-20160812135008-0013.driver.jvm.non-heap.used.csv
app-20160812135008-0013.driver.jvm.pools.Code-Cache.committed.csv
app-20160812135008-0013.driver.jvm.pools.Code-Cache.init.csv
app-20160812135008-0013.driver.jvm.pools.Code-Cache.max.csv
app-20160812135008-0013.driver.jvm.pools.Code-Cache.usage.csv
app-20160812135008-0013.driver.jvm.pools.Code-Cache.used.csv
app-20160812135008-0013.driver.jvm.pools.Compressed-Class-Space.committed.csv
app-20160812135008-0013.driver.jvm.pools.Compressed-Class-Space.init.csv
app-20160812135008-0013.driver.jvm.pools.Compressed-Class-Space.max.csv
app-20160812135008-0013.driver.jvm.pools.Compressed-Class-Space.usage.csv
app-20160812135008-0013.driver.jvm.pools.Compressed-Class-Space.used.csv
app-20160812135008-0013.driver.jvm.pools.Metaspace.committed.csv
app-20160812135008-0013.driver.jvm.pools.Metaspace.init.csv
app-20160812135008-0013.driver.jvm.pools.Metaspace.max.csv
app-20160812135008-0013.driver.jvm.pools.Metaspace.usage.csv
app-20160812135008-0013.driver.jvm.pools.Metaspace.used.csv
app-20160812135008-0013.driver.jvm.pools.PS-Eden-Space.committed.csv
app-20160812135008-0013.driver.jvm.pools.PS-Eden-Space.init.csv
app-20160812135008-0013.driver.jvm.pools.PS-Eden-Space.max.csv
app-20160812135008-0013.driver.jvm.pools.PS-Eden-Space.usage.csv
app-20160812135008-0013.driver.jvm.pools.PS-Eden-Space.used.csv
app-20160812135008-0013.driver.jvm.pools.PS-Old-Gen.committed.csv
app-20160812135008-0013.driver.jvm.pools.PS-Old-Gen.init.csv
app-20160812135008-0013.driver.jvm.pools.PS-Old-Gen.max.csv
app-20160812135008-0013.driver.jvm.pools.PS-Old-Gen.usage.csv
app-20160812135008-0013.driver.jvm.pools.PS-Old-Gen.used.csv
app-20160812135008-0013.driver.jvm.pools.PS-Survivor-Space.committed.csv
app-20160812135008-0013.driver.jvm.pools.PS-Survivor-Space.init.csv
app-20160812135008-0013.driver.jvm.pools.PS-Survivor-Space.max.csv
app-20160812135008-0013.driver.jvm.pools.PS-Survivor-Space.usage.csv
app-20160812135008-0013.driver.jvm.pools.PS-Survivor-Space.used.csv
app-20160812135008-0013.driver.jvm.PS-MarkSweep.count.csv
app-20160812135008-0013.driver.jvm.PS-MarkSweep.time.csv
app-20160812135008-0013.driver.jvm.PS-Scavenge.count.csv
app-20160812135008-0013.driver.jvm.PS-Scavenge.time.csv
app-20160812135008-0013.driver.jvm.total.committed.csv
app-20160812135008-0013.driver.jvm.total.init.csv
app-20160812135008-0013.driver.jvm.total.max.csv
app-20160812135008-0013.driver.jvm.total.used.csv
DAGScheduler.job.activeJobs.csv
DAGScheduler.job.allJobs.csv
DAGScheduler.messageProcessingTime.csv
DAGScheduler.stage.failedStages.csv
DAGScheduler.stage.runningStages.csv
DAGScheduler.stage.waitingStages.csv
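
If it helps, the JVM gauge CSVs above are plain two-column time series (one t,value row per poll, as written by the underlying Codahale CsvReporter, if I read it correctly), so they can be watched directly, e.g.:

# follow the driver heap usage gauge; a new row is appended every polling period
tail -f /opt/spark-1.5.0-bin-hadoop2.6/csvSink/app-20160812135008-0013.driver.jvm.heap.used.csv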
asked Aug 12 '16 by Bacon


People also ask

Who monitor the executors of a Spark application?

Instana collects all Spark application data (including executor data) from the driver JVM. To monitor Spark applications, the Instana agent needs to be installed on the host where the Spark driver JVM is running. Note that there are two ways of submitting Spark applications to the cluster manager.

What is worker and executor in Spark?

In the Spark architecture, each Worker node hosts one or more Executors, which are responsible for running tasks. Executors register themselves with the Driver, so the Driver has full information about the Executors at all times. This working combination of Driver and Workers is known as a Spark application.

What mechanism does Spark communicate with driver and executor?

Spark uses a master/slave architecture: one central coordinator (the Driver) communicates with many distributed workers (the executors). The driver and each of the executors run in their own Java processes. The driver is the process where the main method runs.


1 Answer

It looks like you are not passing metrics.properties when you submit the job. To pass metrics.properties, the command should be:

spark-submit <other parameters> --files metrics.properties \
  --conf spark.metrics.conf=metrics.properties

Note that metrics.properties has to be specified in both --files and --conf; the --files option is what ships the metrics.properties file to the executors. Since you can see the output on the driver but not on the executors, I think you are missing the --files option.
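
For example, with the SparkPi submission from the question, that would look roughly like this (a sketch: master/deploy-mode settings are assumed to come from spark-defaults.conf, as in the original command, and metrics.properties is assumed to sit in the directory you submit from):

./bin/spark-submit --class org.apache.spark.examples.SparkPi \
  --files metrics.properties \
  --conf spark.metrics.conf=metrics.properties \
  lib/spark-examples-1.5.0-hadoop2.6.0.jar 10

On YARN, --files places metrics.properties in each container's working directory, so the relative path in spark.metrics.conf also resolves on the executors.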

answered Oct 20 '22 by Abdullah Shaikh