
Make YARN clean up appcache before retry

The situation is the following:

  1. A YARN application is started. It gets scheduled.
  2. It writes a lot to its appcache directory.
  3. The application fails.
  4. YARN restarts it. It goes pending, because there is not enough disk space anywhere to schedule it. The disks are filled up by the appcache from the failed run.

If I manually intervene and kill the application, the disk space is cleaned up. Now I can manually restart the application and it's fine.
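Concretely, the manual intervention is nothing special, just the standard YARN CLI (the application id below is a placeholder, not the real one):

    # find the stuck application and kill it; the id is a placeholder
    yarn application -list -appStates ACCEPTED,RUNNING
    yarn application -kill application_<cluster-timestamp>_<id>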

I wish I could tell the automated retry to clean up the disk. Alternatively I suppose it could count that used disk as part of the new allocation, since it belongs to the application anyway.

I'll happily take any solution you can offer. I don't know much about YARN. It's an Apache Spark application started with spark-submit in yarn-client mode. The files that fill up the disk are the shuffle spill files.
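For completeness, this is roughly how the job is launched; the class name, jar and resource sizes below are placeholders, not the actual job:

    # sketch of the launch command in yarn-client mode
    spark-submit \
      --master yarn-client \
      --num-executors 10 \
      --executor-memory 8G \
      --class com.example.MyApp \
      myapp.jar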

asked Aug 18 '15 by Daniel Darabos

1 Answer

So here's what happens:

  1. When you submit a YARN application, it creates a private local resource folder for the application (the appcache directory).
  2. Inside this directory the Spark block manager creates subdirectories for storing block data. As noted in the Spark source, these directories live inside the configured local directories and won't be deleted on JVM exit when using the external shuffle service.

  3. This directory can be cleaned up via:

    • A shutdown hook. This is what happens when you kill the application.
    • YARN's DeletionService. This should run automatically when the application finishes. Make sure yarn.nodemanager.delete.debug-delay-sec=0 (see the snippet below). Otherwise you may be hitting an unresolved YARN bug.
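For example, in yarn-site.xml (0 is the default; a positive value delays the DeletionService and keeps the appcache around for debugging):

    <property>
      <name>yarn.nodemanager.delete.debug-delay-sec</name>
      <value>0</value>
      <!-- 0 = delete an application's local dirs as soon as it finishes;
           a positive value keeps them around that many seconds for debugging -->
    </property>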
answered Nov 07 '22 by prudenko