
Unusual Hadoop error - tasks get killed on their own

Tags:

hadoop

When I run my hadoop job I get the following error:

Request received to kill task 'attempt_201202230353_23186_r_000004_0' by user Task has been KILLED_UNCLEAN by the user

The logs appear to be clean. I run 28 reducers, and this doesn't happen for all of them. It happens for a select few, and then the reducer starts again. I fail to understand this. Another thing I have noticed is that for a small dataset I rarely see this error!

RFT asked Feb 29 '12 20:02

People also ask

What will happen when a running task fails in Hadoop?

If a task fails, Hadoop detects the failure and reschedules a replacement on a healthy machine. It terminates the whole job only if the same task fails more than four times, which is the default setting and can be changed.

How failures are handle in MapReduce job?

MapReduce handles task failures by restarting the failed task and re-computing all input data from scratch, regardless of how much data had already been processed.

What is task execution in Hadoop?

Apache Hadoop does not fix or diagnose slow-running tasks. Instead, it tries to detect when a task is running slower than expected and launches an equivalent task as a backup (the backup is called a speculative task). This process is known as speculative execution in Hadoop.
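
For reference, speculative execution can be toggled per job through configuration properties. This is only a minimal sketch: the class name is illustrative, the true/false values are just an example, and the property names are the MRv1-era ones matching this question's time frame (newer releases use mapreduce.map.speculative and mapreduce.reduce.speculative).

    import org.apache.hadoop.conf.Configuration;

    public class SpeculativeConfig {
        // Enable speculative map tasks, disable speculative reduce tasks for a job.
        public static Configuration configure(Configuration conf) {
            conf.setBoolean("mapred.map.tasks.speculative.execution", true);
            conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
            return conf;
        }
    }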

How many tasks are there in MapReduce?

MapReduce jobs have two types of tasks: map tasks and reduce tasks. A map task is a single instance of the map phase; it processes the records in one input split (data block). The input data is split and analyzed in parallel on the assigned compute resources in a Hadoop cluster.


1 Answer

There are three things to try:

Setting a Counter
If Hadoop sees a counter for the job progressing, then it won't kill the task (see Arockiaraj Durairaj's answer). This seems to be the most elegant option, as it can also give you more insight into long-running jobs and where the hang-ups may be.
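
For example, a reducer that increments a counter inside its loop (or calls context.progress()) keeps signalling liveness to the framework. This is only a minimal sketch using the org.apache.hadoop.mapreduce API; the counter group/name and the summing logic are illustrative, not taken from the original question.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class CountingReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : values) {
                sum += v.get();
                // Incrementing a counter (like calling context.progress()) tells the
                // framework the task is still alive, so it is not killed as hung.
                context.getCounter("MyJob", "RECORDS_PROCESSED").increment(1);
            }
            context.write(key, new LongWritable(sum));
        }
    }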

Longer Task Timeouts
By default, Hadoop kills a task that has not reported status for 10 minutes (600 seconds). Changing the timeout is somewhat brute force, but it can work. Imagine analyzing audio files that are generally 5 MB each (songs), but with a few 50 MB files (entire albums). HDFS stores each file in its own block(s), so with a 64 MB block size a 5 MB file and a 50 MB file would each occupy one block (see here http://blog.cloudera.com/blog/2009/02/the-small-files-problem/, and here Small files and HDFS blocks.) However, the 5 MB task would run much faster than the 50 MB task. The task timeout can be increased in the job configuration (mapred.task.timeout) per the answers to this similar question: How to fix "Task attempt_201104251139_0295_r_000006_0 failed to report status for 600 seconds."
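
A minimal sketch of raising the timeout in a job driver follows; the driver class and job name are illustrative, and mapred.task.timeout (milliseconds) is the pre-YARN property name referenced in the linked answers (newer releases use mapreduce.task.timeout).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class TimeoutDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // 30 minutes instead of the 10-minute default (600000 ms)
            conf.setLong("mapred.task.timeout", 1800000L);
            Job job = new Job(conf, "long-running-job");
            job.setJarByClass(TimeoutDriver.class);
            // ... set mapper/reducer/input/output paths here, then:
            // System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }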

Increase Task Attempts
Configure Hadoop to make more than the default 4 attempts per task (see Pradeep Gollakota's answer). This is the most brute-force method of the three: Hadoop will retry the task more times, but you could be masking an underlying issue (small servers, large data blocks, etc).
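
A minimal sketch of raising the per-task attempt limit with the old mapred API; the value 8 is arbitrary and purely illustrative (the equivalent properties are mapred.map.max.attempts and mapred.reduce.max.attempts).

    import org.apache.hadoop.mapred.JobConf;

    public class AttemptsConfig {
        // Allow up to 8 attempts per map/reduce task instead of the default 4.
        public static JobConf configure(JobConf conf) {
            conf.setMaxMapAttempts(8);     // same as mapred.map.max.attempts
            conf.setMaxReduceAttempts(8);  // same as mapred.reduce.max.attempts
            return conf;
        }
    }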

topstair answered Oct 14 '22 18:10