Is there a way to stop a Spark worker through the terminal? I'm aware of the scripts: start-all.sh, stop-all.sh, stop-workers.sh, etc. However, every time I run start-all.sh, residual workers from a previous Spark cluster instance also appear to be spawned. I know this because the Worker ID contains the date and timestamp of when the worker was created.
So when I run start-all.sh today, I see the same 7 or so workers that were created at the beginning of April.
Is there a way to kill these earlier workers? Or perhaps a way to grep for their process names?
The Spark master and slaves can be stopped using the following scripts:

$SPARK_HOME/sbin/stop-master.sh: stops the Spark master node.
$SPARK_HOME/sbin/stop-slaves.sh: stops all slave (worker) nodes together; this should be executed on the Spark master node.
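For example, assuming $SPARK_HOME points at your Spark installation, a clean stop-and-restart from the master node looks roughly like this (on newer Spark releases the slave scripts are named stop-workers.sh/start-workers.sh instead):

$SPARK_HOME/sbin/stop-all.sh     # stop the master and all the workers it knows about
$SPARK_HOME/sbin/start-all.sh    # start a fresh master plus the workers listed in conf/slaves (conf/workers on newer versions)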
An Executor is dedicated to a specific Spark application and is terminated when that application completes. A Spark application normally uses many Executors, often working in parallel. Typically, a Worker node, which hosts the Executor processes, has a fixed number of Executors allocated to it at any point in time.
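As a sketch of how that per-machine allocation is controlled in standalone mode, the worker settings live in conf/spark-env.sh; the values below are purely illustrative, not recommendations:

# conf/spark-env.sh (standalone mode)
SPARK_WORKER_INSTANCES=1   # number of Worker processes to run on this machine
SPARK_WORKER_CORES=4       # total cores this Worker may hand out to Executors
SPARK_WORKER_MEMORY=8g     # total memory this Worker may hand out to Executors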
This has happened to me in the past and what I usually do is:
1) Find the process id:
ps aux | grep spark
2) And kill it:
sudo kill <worker-pid>
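If you only want to kill the stale standalone workers (and not, say, the master or your own grep), matching on the worker's main class is a bit more targeted. A minimal sketch, assuming a standalone deployment where each worker runs the org.apache.spark.deploy.worker.Worker class:

# list worker JVMs with their PIDs (jps ships with the JDK)
jps -l | grep org.apache.spark.deploy.worker.Worker

# or match on the full command line and kill every worker process at once
pkill -f org.apache.spark.deploy.worker.Worker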