I have an Apache Spark 0.9.0 cluster installed, on which I am trying to deploy code that reads a file from HDFS. The code throws a warning and eventually the job fails. Here is the code:
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Running this code fails with the warning:
 * "Initial job has not accepted any resources; check your cluster UI to ensure that
 * workers are registered and have sufficient memory"
 */
object Main extends App {
  val sconf = new SparkConf()
    .setMaster("spark://labscs1:7077")
    .setAppName("spark scala")
  val sctx = new SparkContext(sconf)
  sctx.parallelize(1 to 100).count
}
Below is the warning message:
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
How do I get rid of this, or am I missing some configuration?
You get this when the number of cores or the amount of RAM per node you request via spark.cores.max and spark.executor.memory, respectively, exceeds what is available. So even if no one else is using the cluster, if you ask for, say, 100 GB of RAM per node but your nodes can only provide 90 GB, you will get this message.
To be fair, the message is vague in this situation; it would be more helpful if it said you were exceeding the maximum.
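For example, you can cap the request explicitly so it stays at or below what your workers advertise in the master web UI (port 8080 by default). A minimal sketch, reusing the question's setup; the "2" cores and "2g" values are placeholders you should replace with numbers your workers can actually supply:

import org.apache.spark.{SparkConf, SparkContext}

object Main extends App {
  val sconf = new SparkConf()
    .setMaster("spark://labscs1:7077")
    .setAppName("spark scala")
    // Placeholder limits: set these to no more than the cores and
    // memory per worker shown in the cluster UI.
    .set("spark.cores.max", "2")         // total cores the app may use
    .set("spark.executor.memory", "2g")  // memory requested per executor
  val sctx = new SparkContext(sconf)
  sctx.parallelize(1 to 100).count
}

If the requested values fit within what the workers report, the job should be scheduled and the warning should disappear.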