When I run this:
spark-1.3.0-bin-hadoop2.4% sbin/start-slave.sh
I get this message:
failed to launch org.apache.spark.deploy.worker.Worker:
Default is conf/spark-defaults.conf.
Even though I have this:
spark-1.3.0-bin-hadoop2.4% ll conf | grep spark-defaults.conf
-rw-rwxr--+ 1 xxxx.xxxxx ama-unix 507 Apr 29 07:09 spark-defaults.conf
-rw-rwxr--+ 1 xxxx.xxxxx ama-unix 507 Apr 13 12:06 spark-defaults.conf.template
Any idea why?
Thanks
An Executor is dedicated to a specific Spark application and is terminated when that application completes. A Spark application normally runs many Executors, often working in parallel. Typically, a Worker node, which hosts the Executor processes, has a finite or fixed number of Executors allocated to it at any point in time.
The Apache Spark framework uses a master-slave architecture that consists of a driver, which runs on a master node, and many executors that run across the worker nodes of the cluster.
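To make that concrete, here is a minimal sketch of how an application gets its executors in standalone mode: the driver is started via spark-submit and asks the master for a bounded amount of executor resources. The master host, memory/core values and jar path below are placeholders, not values from the question:
# driver runs where spark-submit is invoked (default client mode);
# executors are launched on the workers, capped by the limits below
spark-1.3.0-bin-hadoop2.4% bin/spark-submit \
    --master spark://<master-host>:7077 \
    --executor-memory 2g \
    --total-executor-cores 4 \
    path/to/your-app.jar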
Is it possible to use the master node (the PC) as both master and slave in a Spark cluster? Is it possible to have 2 slaves and 1 master node? Yes, it is possible; you can configure the same machine to act as both, and there are plenty of guides available for it. A rough sketch is shown below.
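As a sketch of that setup in standalone mode (the worker hostnames are placeholders): list every machine that should run a worker in conf/slaves, including the master machine itself, and start the whole cluster from the master:
spark-1.3.0-bin-hadoop2.4% cat conf/slaves
localhost        # the master machine also runs a worker
worker-node-1
worker-node-2
spark-1.3.0-bin-hadoop2.4% sbin/start-all.sh    # starts the master here plus one worker per host listed above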
I'm using Spark 1.6.1, and you no longer need to indicate a worker number, so the actual usage is:
start-slave.sh spark://<master>:<port>
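For example, on Spark 1.6+ a minimal sequence, assuming the master runs on the same machine and listens on the default port 7077, would be:
sbin/start-master.sh                          # the master URL appears in its log / web UI
sbin/start-slave.sh spark://localhost:7077    # attach one worker to that master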
First of all, you should make sure you are using the command correctly:
Usage: start-slave.sh <worker#> <spark-master-URL>
where <worker#> is the worker number you want to launch on the machine on which you are running this script, and <spark-master-URL> looks like spark://localhost:7077
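So, for the Spark 1.3.0 build from the question, the start-slave.sh call failed because it was run with no arguments; something like the following should work (worker number 1, with a master assumed to be already running locally on the default port 7077):
spark-1.3.0-bin-hadoop2.4% sbin/start-master.sh
spark-1.3.0-bin-hadoop2.4% sbin/start-slave.sh 1 spark://localhost:7077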