How do I submit more than one job to Hadoop in a step using the Elastic MapReduce API?

The Amazon EMR documentation on adding steps to a cluster says that a single Elastic MapReduce step can submit several jobs to Hadoop. However, the Amazon EMR documentation for step configuration suggests that a single step can accommodate just one execution of hadoop-streaming.jar (that is, HadoopJarStep is a single HadoopJarStepConfig rather than an array of HadoopJarStepConfigs). A roughly equivalent AWS CLI call is sketched below.
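For context, the Steps parameter of the API is a list of step configurations, and each one wraps exactly one jar invocation. A sketch of the corresponding CLI call (the cluster ID, paths, and streaming arguments are illustrative placeholders):

# Each entry in --steps is one step, and each step runs exactly one jar
aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
    --steps Type=STREAMING,Name=StreamingStep,ActionOnFailure=CONTINUE,Args=[-input,s3://mybucket/input,-output,s3://mybucket/output,-mapper,mapper.py,-reducer,reducer.py]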

What is the proper syntax for submitting several jobs to Hadoop in a step?

asked Jun 14 '14 by verve

People also ask

Which AWS service can be used to process a large amount of data using the Hadoop framework?

Amazon EMR is a managed service that lets you process and analyze large datasets using the latest versions of big data processing frameworks such as Apache Hadoop, Spark, HBase, and Presto on fully customizable clusters.

What action will an EMR cluster configured for step execution take after running a hive program?

When you configure termination after step execution, the cluster starts, runs bootstrap actions, and then runs the steps that you specify. As soon as the last step completes, Amazon EMR terminates the cluster's Amazon EC2 instances.
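With the AWS CLI, this behavior is requested at cluster creation with the --auto-terminate flag. A minimal sketch, keeping the AMI version used later in this answer; the key name, instance settings, and Hive script path are placeholders:

# The cluster bootstraps, runs its steps, then terminates on its own
aws emr create-cluster --name "Hive step cluster" --ami-version 3.11 --use-default-roles \
    --ec2-attributes KeyName=myKey --instance-type m3.xlarge --instance-count 3 \
    --auto-terminate \
    --steps Type=HIVE,Name=HiveProgram,ActionOnFailure=CONTINUE,Args=[-f,s3://mybucket/my_hive_script.q]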

How many EMR clusters can be run simultaneously?

You can start as many clusters as you like. When you first get started, however, you are limited to 20 EC2 instances across all of your clusters.

What is elastic MapReduce used for?

Amazon EMR (previously called Amazon Elastic MapReduce) is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data.


1 Answer

As the Amazon EMR documentation says, you can create a cluster that runs a script my_script.sh on the master instance as a step:

aws emr create-cluster --name "Test cluster" --ami-version 3.11 --use-default-roles \
    --ec2-attributes KeyName=myKey --instance-type m3.xlarge --instance-count 3 \
    --steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://elasticmapreduce/libs/script-runner/script-runner.jar,Args=["s3://mybucket/script-path/my_script.sh"]

my_script.sh should look something like this:

#!/usr/bin/env bash

# Launch each Hadoop job in the background so they run concurrently
hadoop jar my_first_step.jar [mainClass] args... &
hadoop jar my_second_step.jar [mainClass] args... &
# ... more jobs as needed ...

# Block until every background job finishes, so the step
# doesn't complete before the jobs do
wait

This way, multiple jobs are submitted to Hadoop in the same step. Unfortunately, the EMR interface won't be able to track them individually; to do that, use the Hadoop web interfaces on the master node, or simply ssh to the master instance and explore with mapred job.
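For example, after you ssh to the master node, something along these lines shows what is running (the job ID below is a made-up placeholder; take real IDs from the -list output):

# List all MapReduce jobs currently known to the cluster
mapred job -list

# Inspect one job, using an ID taken from the list output
mapred job -status job_1402742346000_0001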

answered Sep 27 '22 by verve