
Job and Task Scheduling In Hadoop

I am a little confused about the terms "job scheduling" and "task scheduling" in Hadoop, which I came across while reading about delay scheduling in this slide deck.

Please correct me if I am wrong in my following assumptions:

  1. The default (FIFO) scheduler, the Capacity Scheduler and the Fair Scheduler only come into play at the job level, when multiple jobs have been submitted by users. They play no role if there is only a single job in the system. These algorithms form the basis of "job scheduling".

  2. Each job can have multiple map and reduce tasks. How are these assigned to each machine? How are tasks scheduled within a single job? What is the basis of "task scheduling"?

GoT asked Sep 29 '13 at 18:09
1 Answer

In the case of the fair scheduler, when there is a single job running, that job uses the entire cluster. When other jobs are submitted, task slots that free up are assigned to the new jobs, so that each job gets roughly the same amount of CPU time.

Unlike the default Hadoop scheduler, which forms a queue of jobs, this lets short jobs finish in reasonable time while not starving long jobs. It is also an easy way to share a cluster among multiple users. Fair sharing can also work with job priorities - the priorities are used as weights to determine the fraction of total compute time that each job gets.
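In the classic (MR1) Fair Scheduler, these minimum shares and weights are declared in an allocations file (fair-scheduler.xml). A minimal sketch - the pool names here are only illustrative:

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml: pool names "production" and "adhoc" are examples -->
<allocations>
  <pool name="production">
    <minMaps>20</minMaps>       <!-- guaranteed map slots for this pool -->
    <minReduces>10</minReduces> <!-- guaranteed reduce slots -->
    <weight>2.0</weight>        <!-- receives 2x the fair share of an unweighted pool -->
  </pool>
  <pool name="adhoc">
    <weight>1.0</weight>
  </pool>
</allocations>
```

Jobs are assigned to pools (by default per user, configurable via mapred.fairscheduler.poolnameproperty), and as task slots free up they are handed to whichever pool is furthest below its fair share.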

The CapacityScheduler is designed to allow sharing a large cluster while giving each organization a minimum capacity guarantee. The central idea is that the available resources in the Hadoop Map-Reduce cluster are partitioned among multiple organizations who collectively fund the cluster based on computing needs. There is an added benefit that an organization can access any excess capacity not being used by others. This provides elasticity for the organizations in a cost-effective manner.
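In MR1 the minimum-capacity guarantee is expressed as a per-queue percentage of cluster slots in capacity-scheduler.xml. A minimal sketch, with hypothetical queue names for two organizations (the queues themselves are declared separately via mapred.queue.names):

```xml
<!-- capacity-scheduler.xml: two organizations sharing one cluster -->
<property>
  <name>mapred.capacity-scheduler.queue.orgA.capacity</name>
  <value>70</value> <!-- orgA is guaranteed 70% of cluster slots -->
</property>
<property>
  <name>mapred.capacity-scheduler.queue.orgB.capacity</name>
  <value>30</value> <!-- orgB is guaranteed 30% -->
</property>
```

When one queue is idle, jobs in the other queue can borrow its unused slots, which is the elasticity described above; the borrowed capacity is reclaimed as the idle queue's own jobs arrive.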

SSaikia_JtheRocker answered Sep 19 '22 at 21:09