
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

I'm trying to run the Spark examples from Eclipse and I'm getting this generic error: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources.

The version I have is spark-1.6.2-bin-hadoop2.6. I started Spark using the ./sbin/start-master.sh command from a shell, and set my SparkConf like this:

SparkConf conf = new SparkConf().setAppName("Simple Application");
conf.setMaster("spark://My-Mac-mini.local:7077");

I'm not including any other code here because this error pops up with any of the examples I'm running. The machine is a Mac running OS X and I'm pretty sure it has enough resources to run the simplest examples.

What am I missing?

asked Jun 30 '16 by Eddy


4 Answers

The error indicates that your cluster has insufficient resources for the current job. Since you have not started the slaves (i.e. the workers), the cluster won't have any resources to allocate to your job. Starting the slaves will fix it.

`start-slave.sh spark://<master-ip>:7077`
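
For example, on the asker's setup the full sequence would look roughly like this (a minimal sketch; the hostname comes from the question, and 8080 is the standalone master web UI's default port):

# start the master, then attach a worker to it
./sbin/start-master.sh
./sbin/start-slave.sh spark://My-Mac-mini.local:7077
# the worker should now show up as ALIVE on the master UI at http://localhost:8080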
answered Oct 19 '22 by Knight71


I had the same problem, and it was because the workers could not communicate with the driver.

You need to set spark.driver.port (and open that port on your driver machine), spark.driver.host, and spark.driver.bindAddress in your spark-submit from the driver.
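
For example, a spark-submit invocation along those lines might look roughly like this (a sketch only; the driver IP, port numbers, class name, and jar name are placeholders to replace with your own values):

spark-submit \
  --master spark://My-Mac-mini.local:7077 \
  --conf spark.driver.host=192.168.1.10 \
  --conf spark.driver.bindAddress=0.0.0.0 \
  --conf spark.driver.port=7078 \
  --class SimpleApp \
  simple-app.jar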

answered Oct 19 '22 by Maxime Maillot


Solution to your problem

Reason

  1. The Spark master doesn't have any resources allocated to execute the job, i.e. no worker (slave) node is running.

Fix

  1. You have to start the slave node and connect it to the master node, like this: /SPARK_HOME/sbin> ./start-slave.sh spark://localhost:7077 (if your master runs on your local node)

Conclusion

  1. Start your master node and also the slave node before running spark-submit, so that enough resources are allocated to execute the job.

Alternative way

  1. You can instead make the necessary changes in the spark-env.sh file (see the sketch below), but this is not recommended.
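
If you do go that route, the relevant entries look roughly like this (a minimal sketch; the values are examples only, and the file is created by copying conf/spark-env.sh.template):

# SPARK_HOME/conf/spark-env.sh, read by the master and worker daemons at startup
export SPARK_MASTER_IP=localhost     # use SPARK_MASTER_HOST on newer Spark releases
export SPARK_WORKER_CORES=2          # cores each worker offers to executors
export SPARK_WORKER_MEMORY=2g        # memory each worker offers to executors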
answered Oct 19 '22 by Praveen Kumar K S


If you're trying to run your application from an IDE, and you have free resources on your workers, you need to do the following:

1) First of all, configure the master and worker Spark nodes.

2) Specify the driver (your PC) configuration so that the workers can return calculation results to it.

SparkConf conf = new SparkConf()
            .setAppName("Test spark")
            .setMaster("spark://ip of your master node:port of your master node")
            // make all communication ports static (not necessary if you disabled
            // firewalls, or if your nodes are on a local network; otherwise you
            // must open these ports in your firewall settings)
            .set("spark.blockManager.port", "10025")
            .set("spark.driver.blockManager.port", "10026")
            .set("spark.driver.port", "10027")
            .set("spark.cores.max", "12")
            .set("spark.executor.memory", "2g")
            .set("spark.driver.host", "ip of your driver (PC)"); // necessary
answered Oct 19 '22 by dancelikefish