 

Running a Job on Spark 0.9.0 throws error

I have an Apache Spark 0.9.0 cluster installed, and I am trying to deploy code which reads a file from HDFS. This piece of code throws a warning and eventually the job fails. Here is the code:

/**
 * Running this code fails with the warning:
 * Initial job has not accepted any resources; check your cluster UI to ensure that
 * workers are registered and have sufficient memory
 */

import org.apache.spark.{SparkConf, SparkContext}

object Main extends App {
  val sconf = new SparkConf()
    .setMaster("spark://labscs1:7077")
    .setAppName("spark scala")
  val sctx = new SparkContext(sconf)
  sctx.parallelize(1 to 100).count
}

Below is the warning message:

Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

How do I get rid of this, or am I missing some configuration?

asked Feb 10 '14 by prassee


1 Answer

You get this when either the number of cores or the amount of RAM per node that you request via spark.cores.max and spark.executor.memory, respectively, exceeds what is available. So even if no one else is using the cluster, if you ask for, say, 100 GB of RAM per node but your nodes can only provide 90 GB, you will get this error message.
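
As a rough sketch (not necessarily the exact fix for your cluster), you can cap both settings on the SparkConf from the question so the request stays within what your workers report. The values "4" and "2g" below are placeholders; check the master web UI for the cores and memory your workers actually offer and set values at or below that.

import org.apache.spark.{SparkConf, SparkContext}

object Main extends App {
  val sconf = new SparkConf()
    .setMaster("spark://labscs1:7077")
    .setAppName("spark scala")
    // Placeholder values: keep these at or below what the cluster UI
    // reports as available, otherwise the job will never be scheduled.
    .set("spark.cores.max", "4")        // total cores the app may use
    .set("spark.executor.memory", "2g") // memory requested per executor

  val sctx = new SparkContext(sconf)
  println(sctx.parallelize(1 to 100).count)
}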

To be fair, the message is vague in this situation; it would be more helpful if it said you are exceeding the maximum.

answered Nov 10 '22 by samthebest