How does YARN decide to create how many containers? (Why the difference between S3a and HDFS?)

I'm using a current version of Hadoop and running some TestDFSIO benchmarks (v. 1.8) to compare the case where the default file system is HDFS against the case where the default file system is an S3 bucket (accessed via S3a).

When reading 100 x 1 MB files with S3a as the default file system, I observe that the maximum number of containers shown in the YARN Web UI is lower than in the HDFS case, and S3a is about 4 times slower.

When reading 1000 x 10 KB files with S3a as the default file system, the maximum number of containers shown in the YARN Web UI is at least 10 times lower than in the HDFS case, and S3a is roughly 18 times slower (56 seconds of test execution time with HDFS as default, versus 1012 seconds, about 17 minutes, with S3a as default).

The number of Launched Map Tasks is as expected in each case; there's no difference in that respect. But why is YARN creating at least 10 times fewer containers (e.g. 117 on HDFS versus 8 on S3a)? How does YARN decide how many containers to create when the cluster's vcores and RAM, and the job's input splits and launched map tasks, are all the same, and only the storage back-end differs?

It is of course reasonable to expect some performance difference between HDFS and Amazon S3 (via S3a) when running the same TestDFSIO jobs. What I'm after is understanding how YARN decides the maximum number of containers it launches during those jobs when only the default file system is changed, because currently, when the default file system is S3a, YARN forgoes roughly 90% of the parallelism it normally achieves when the default file system is HDFS.

The cluster has 15 nodes: 1 NameNode, 1 ResourceManager (YARN), and 13 DataNodes (worker nodes). Each node has 128 GB of RAM and a 48-core CPU. This is a dedicated testing cluster: nothing else runs on it during the TestDFSIO runs.

For HDFS, the dfs.blocksize is 256m, and it uses 4 HDDs (dfs.datanode.data.dir is set to file:///mnt/hadoopData1,file:///mnt/hadoopData2,file:///mnt/hadoopData3,file:///mnt/hadoopData4).

For S3a, fs.s3a.block.size is set to 268435456, that is 256m, same as HDFS default block size.

The Hadoop tmp directory is on an SSD (hadoop.tmp.dir is set to /mnt/ssd1/tmp in core-site.xml, and mapreduce.cluster.local.dir is set to /mnt/ssd1/mapred/local in mapred-site.xml).
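
As a minimal sketch, here are the same settings expressed through Hadoop's Configuration API (in the actual cluster they live in core-site.xml, hdfs-site.xml, and mapred-site.xml as described above; fs.defaultFS is the non-deprecated name of fs.default.name):

    import org.apache.hadoop.conf.Configuration;

    public class BenchmarkConfSketch {
        // Sketch of the benchmark settings described above; the real values
        // are set in core-site.xml, hdfs-site.xml, and mapred-site.xml.
        static Configuration build(boolean useS3a) {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", useS3a
                ? "s3a://emre-hadoop-test-bucket"
                : "hdfs://hadoop1:9000");
            conf.set("dfs.blocksize", "256m");
            conf.set("dfs.datanode.data.dir",
                "file:///mnt/hadoopData1,file:///mnt/hadoopData2,"
                + "file:///mnt/hadoopData3,file:///mnt/hadoopData4");
            conf.setLong("fs.s3a.block.size", 268435456L); // 256m, matching dfs.blocksize
            conf.set("hadoop.tmp.dir", "/mnt/ssd1/tmp");
            conf.set("mapreduce.cluster.local.dir", "/mnt/ssd1/mapred/local");
            return conf;
        }
    }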

The performance difference (default HDFS, versus default set to S3a) is summarized below:

TestDFSIO v. 1.8 (READ)

fs.default.name                # of Files x File Size  Launched Map Tasks  Max # of Containers (YARN Web UI)  Test Exec Time (sec)
=============================  ======================  ==================  =================================  ====================
hdfs://hadoop1:9000            100  x  1 MB                           100                                117                    19
hdfs://hadoop1:9000            1000 x 10 KB                          1000                                117                    56
s3a://emre-hadoop-test-bucket  100  x  1 MB                           100                                 60                    78
s3a://emre-hadoop-test-bucket  1000 x 10 KB                          1000                                  8                  1012

[Screenshot: YARN Web UI for the 100 x 1 MB run, default FS HDFS versus default FS S3a]

[Screenshot: YARN Web UI for the 1000 x 10 KB run, default FS HDFS versus default FS S3a]

asked Mar 01 '16 by Emre Sevinç



1 Answer

Long story short, one of the important criteria YARN uses to decide how many containers to create is data locality. When using a non-HDFS file system, such as S3a connecting to Amazon S3 or another S3-compatible object store, it is the responsibility of the file system implementation to report data locality information. With an object store, none of the data is local to any node: every node has to retrieve the data over the network, or, put another way, every node has exactly the same (absent) data locality.
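
As a rough illustration (my sketch, not the actual DistributedFileSystem or S3AFileSystem source; the host names are hypothetical), this is the shape of the locality information a FileSystem returns from getFileBlockLocations(), which FileInputFormat attaches to each input split:

    import org.apache.hadoop.fs.BlockLocation;

    public class LocalitySketch {
        // HDFS-style answer: each block lists the DataNodes holding a
        // replica, so the MapReduce application master can ask YARN for
        // containers on (or near) those hosts.
        static BlockLocation hdfsStyle(long offset, long length) {
            return new BlockLocation(
                new String[] { "hadoop5:50010", "hadoop9:50010", "hadoop12:50010" },
                new String[] { "hadoop5", "hadoop9", "hadoop12" },
                offset, length);
        }

        // Object-store-style answer: there are no DataNodes, so every
        // block reports the same placeholder host. To the scheduler, no
        // node is "closer" to the data than any other.
        static BlockLocation s3aStyle(long offset, long length) {
            return new BlockLocation(
                new String[] { "localhost:50010" },
                new String[] { "localhost" },
                offset, length);
        }
    }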

The previous paragraph explains the container-creation behavior I observed when running Hadoop MapReduce jobs against Amazon S3 using the S3a file system. To fix the issue, I've started to work on a patch; the development will be tracked via HADOOP-12878.
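
Those locality hints matter because they end up in the application master's container requests. Below is a minimal sketch of that request shape using YARN's AMRMClient API (the memory, vcore, priority, and host values are made up for illustration):

    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

    public class RequestSketch {
        // Sketch: an application master asks YARN for a container, listing
        // preferred nodes derived from the input split's block locations.
        // With HDFS those are real DataNode hosts; with S3a there are no
        // meaningful hosts to prefer.
        static ContainerRequest mapTaskRequest() {
            Resource capability = Resource.newInstance(1024 /* MB */, 1 /* vcore */);
            return new ContainerRequest(
                capability,
                new String[] { "hadoop5", "hadoop9", "hadoop12" }, // preferred nodes
                null,                                              // racks (optional)
                Priority.newInstance(0));
        }
    }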

Also see the following:

  • SO question: hadoop/yarn and task parallelization on non-hdfs filesystems
  • Mail list thread: hadoop/yarn and task parallelization on non-hdfs filesystems

answered Sep 22 '22 by Emre Sevinç