Terminating a Spark step in AWS

I want to set up a series of Spark steps on an EMR Spark cluster and terminate the current step if it's taking too long. However, when I SSH into the master node and run hadoop job -list, the master node seems to believe that there are no jobs running. I don't want to terminate the cluster, because doing so would force me to buy a whole new hour of whatever cluster I'm running. Can anyone please help me terminate a Spark step in EMR without terminating the entire cluster?

asked Jan 26 '16 by Daniel Imberman

People also ask

How do you stop a YARN application in EMR?

Another option to stop a running application is to use the YARN command line (this approach does not require port forwarding). You must SSH into the master node (via bastion) and run the following command: sudo yarn application -kill <application-ID>.
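
A minimal sketch of that flow, assuming a key pair named my-key.pem and a placeholder master node hostname (substitute your own values):

# SSH into the EMR master node (hostname and key file are placeholders)
ssh -i my-key.pem hadoop@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

# On the master node: list applications to find the ID of the long-running step
yarn application -list

# Kill it by application ID (this ID is a placeholder)
sudo yarn application -kill application_1453832119372_0001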

How do I stop my EMR from running?

To cancel a running step, kill either the application ID (for YARN steps) or the process ID (for non-YARN steps). In Amazon EMR versions 5.28.0 and later, you can use cancel-steps to cancel both pending and running steps.
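
For example, with the AWS CLI (the cluster ID and step ID below are placeholders; SEND_INTERRUPT asks the step to shut down gracefully, while TERMINATE_PROCESS kills it outright):

# Cancel a running or pending step without touching the cluster itself
aws emr cancel-steps \
    --cluster-id j-2AXXXXXXGAPLF \
    --step-ids s-3MXXXXXXXXXXX \
    --step-cancellation-option SEND_INTERRUPT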


1 Answer

That's easy:

yarn application -kill [application id]

You can list your running applications with:

yarn application -list
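
Putting the two together, a sketch that kills every application currently in the RUNNING state (assumes the default yarn application -list column layout, where the application ID is the first field):

# List RUNNING applications, extract their IDs, and kill each one
yarn application -list -appStates RUNNING 2>/dev/null \
    | awk '/^application_/ {print $1}' \
    | xargs -r -n 1 yarn application -kill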
answered Sep 19 '22 by Erik Schmiegelow