I have installed Spark in standalone mode on a set of nodes and tried to launch the cluster through the cluster launch scripts. I have added the workers' IP addresses to the conf/slaves file, and the master connects to all slaves through password-less SSH.
After running the ./bin/start-slaves.sh script, I get the following message:
starting org.apache.spark.deploy.worker.Worker, logging to /root/spark-0.8.0-incubating/bin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-jbosstest2.out
However, the master's web UI (localhost:8080) does not show any information about the worker. When I add a localhost entry to my conf/slaves file, the worker info for localhost is shown.
There are no error messages; the terminal says the worker has started, but the web UI is not showing any workers.
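For reference, my conf/slaves just lists one worker host per line (the addresses below are placeholders for my actual IPs), and everything is launched from the master:

# conf/slaves -- one worker hostname or IP address per line (placeholder values)
192.168.0.101
192.168.0.102

# run on the master node
./bin/start-master.sh
./bin/start-slaves.sh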
I had the same problem. I noticed that I could not telnet master:port from the slaves. In my /etc/hosts file (on the master) I had a 127.0.0.1 master entry (before my 192.168.0.x master entry). Once I removed the 127.0.0.1 entry from /etc/hosts, I could telnet, and when I ran start-slaves.sh (from the master) my slaves connected.
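Roughly what that looks like (the addresses are examples from my network, and 7077 is the default standalone master port):

# /etc/hosts on the master -- before (workers fail to register):
#   127.0.0.1      master     <- remove this line
#   192.168.0.10   master

# /etc/hosts on the master -- after:
127.0.0.1      localhost
192.168.0.10   master

# from a slave, verify the master's service port is reachable:
telnet master 7077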
When you run the cluster, run the jps command on the worker nodes to check whether the Worker process is up, and look for the worker's PID in its log file.
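For example (the PID and log file name below are only illustrative, taken from the output in the question):

# on a worker node, the Worker JVM should show up in the jps output, e.g.:
#   2345 Worker
#   2398 Jps
jps

# the worker log (path printed by start-slaves.sh) records startup and the
# attempt to register with the master
tail -n 50 /root/spark-0.8.0-incubating/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-jbosstest2.out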
Or set the following, then run the cluster and check whether the web UIs come up on your configured ports:
export SPARK_MASTER_WEBUI_PORT=5050
export SPARK_WORKER_WEBUI_PORT=4040
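With those variables exported before starting the daemons, a quick way to check is to hit the UIs directly (the hostnames are placeholders) or look at the listening ports:

# from the master, check that the web UIs respond on the configured ports
curl -I http://master:5050      # master web UI (SPARK_MASTER_WEBUI_PORT)
curl -I http://worker1:4040     # worker web UI (SPARK_WORKER_WEBUI_PORT)

# or check which ports the Java daemons are listening on
netstat -tlnp | grep java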