I ran a MapReduce job on hadoop-2.7.0, but the job could not be started and I got the error below:
Job job_1491779488590_0002 failed with state FAILED due to: Application application_1491779488590_0002 failed 2 times due to AM Container for appattempt_1491779488590_0002_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://erfan:8088/cluster/app/application_1491779488590_0002Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1491779488590_0002_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
17/04/10 13:40:08 INFO mapreduce.Job: Counters: 0
What is the reason for this error, and how can I solve this problem?
Any help is appreciated.
Check the logs on the Resource Manager web UI:
namenodeip:8088
There you will see the same error.
Now open a terminal and check the actual problem:
yarn logs -applicationId <APP_ID>
Example: APP_ID = application_1535002188113_0001
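For example, with the application id from the question above, the full command would be (this assumes YARN log aggregation is enabled; otherwise the command will report that the logs are not available):
yarn logs -applicationId application_1491779488590_0002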
In my case it showed permission issues, so I changed the permissions on the history directory:
sudo -u hdfs hadoop fs -chmod 775 /user/history
or
sudo -u hdfs hadoop fs -chmod 777 /user/history
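You can then verify the new permissions with a quick listing (note: /user/history is the path from my setup and may differ on yours):
hadoop fs -ls /user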
You can see the application logs for the actual issue.
For this you can open the namenode web interface at namenode_ip:50070.
Here you will see a browse option; click it.
In the submenu, select Logs.
Now select userlogs.
Here you can see the list of applications you ran.
Open the link application_1491779488590_0002 for your above-mentioned job. Inside, you can see the logs for each map and reduce task; open a map/reduce task log link.
Inside it you will find the syslog, stderr, and stdout files. From these log files you can get the actual error and fix it.
Alternatively, you can find these logs on the local filesystem under the $HADOOP_HOME/logs/userlogs/<application_id> path.
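For example, something like the following on the node that ran the container (the container directory name here is taken from the error in the question and is only illustrative):
ls $HADOOP_HOME/logs/userlogs/application_1491779488590_0002/
cat $HADOOP_HOME/logs/userlogs/application_1491779488590_0002/container_1491779488590_0002_02_000001/stderr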
The application failed 2 times because, if the application master fails for some reason, YARN will by default retry the application one more time. The AM max-attempts property can be set to 1 to avoid this.
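For example, a minimal yarn-site.xml entry, assuming the standard yarn.resourcemanager.am.max-attempts property (its default is 2, which is why you see two attempts):
<property>
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>1</value>
</property>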
You can modify the yarn-site.xml file and add this code (note: %HADOOP_HOME% is your environment variable):
<property>
  <name>yarn.application.classpath</name>
  <value>
    %HADOOP_HOME%\etc\hadoop,
    %HADOOP_HOME%\share\hadoop\common\*,
    %HADOOP_HOME%\share\hadoop\common\lib\*,
    %HADOOP_HOME%\share\hadoop\hdfs\*,
    %HADOOP_HOME%\share\hadoop\hdfs\lib\*,
    %HADOOP_HOME%\share\hadoop\mapreduce\*,
    %HADOOP_HOME%\share\hadoop\mapreduce\lib\*,
    %HADOOP_HOME%\share\hadoop\yarn\*,
    %HADOOP_HOME%\share\hadoop\yarn\lib\*
  </value>
</property>
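If you are on Linux rather than Windows, the same idea applies with $HADOOP_HOME and forward slashes; a sketch, assuming Hadoop is installed under $HADOOP_HOME (adjust the paths to your installation):
<property>
  <name>yarn.application.classpath</name>
  <value>
    $HADOOP_HOME/etc/hadoop,
    $HADOOP_HOME/share/hadoop/common/*,
    $HADOOP_HOME/share/hadoop/common/lib/*,
    $HADOOP_HOME/share/hadoop/hdfs/*,
    $HADOOP_HOME/share/hadoop/hdfs/lib/*,
    $HADOOP_HOME/share/hadoop/mapreduce/*,
    $HADOOP_HOME/share/hadoop/mapreduce/lib/*,
    $HADOOP_HOME/share/hadoop/yarn/*,
    $HADOOP_HOME/share/hadoop/yarn/lib/*
  </value>
</property>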