
Job aborted due to stage failure: ShuffleMapStage 20 (repartition at data_prep.scala:87) has failed the maximum allowable number of times: 4

I am submitting a Spark job with the following specification (the same program has been used to run on different data sizes, ranging from 50 GB to 400 GB):

/usr/hdp/2.6.0.3-8/spark2/bin/spark-submit \
 --master yarn \
 --deploy-mode cluster \
 --driver-memory 5G \
 --executor-memory 10G \
 --num-executors 60 \
 --conf spark.yarn.executor.memoryOverhead=4096 \
 --conf spark.shuffle.registration.timeout=1500 \
 --executor-cores 3 \
 --class classname /home//target/scala-2.11/test_2.11-0.13.5.jar

I have tried repartitioning the data while reading, and I have also applied a repartition before doing any count-by-key operation on the RDD:

// Swap the nested pair fields, deduplicate, and spread the result over 300 partitions
val rdd1 = rdd.map(x => (x._2._2, x._2._1)).distinct.repartition(300)
// Count the distinct values of the second element
val receiver_count = rdd1.map(_._2).distinct.count
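For context, the repartition-while-reading step mentioned above is not shown in the snippet; a minimal sketch of what it looks like is below, where the input path and partition count are placeholders rather than the actual values from the job:

// Hypothetical sketch: repartition immediately after reading the input;
// the path and 300 are placeholder values.
val raw = sc.textFile("/path/to/input").repartition(300)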

User class threw exception:

org.apache.spark.SparkException: Job aborted due to stage failure: ShuffleMapStage 20 (repartition at data_prep.scala:87) has failed the maximum allowable number of times: 4. Most recent failure reason: org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 9

asked Jul 04 '19 by manohar
1 Answer

In my case, I gave my executors a little more memory and the job went through fine. You should look at which stage your job is failing in and, based on that, decide whether increasing or decreasing the executors' memory would help.
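As a rough illustration only (the 16G and 6144 values are placeholders I am assuming, not the exact settings that fixed my job), bumping the executor memory and the YARN overhead on the original submit command would change these flags:

 --executor-memory 16G \
 --conf spark.yarn.executor.memoryOverhead=6144 \

Everything else in the spark-submit command stays the same; the idea is simply to give each executor more headroom for the shuffle that is failing.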

answered Oct 06 '22 by rishab137