Can somebody give a high-level, simple explanation to a beginner about how Hadoop works?

I know how memcached works. How does Hadoop work?

asked Mar 23 '10 by TIMEX

1 Answer

Hadoop consists of a number of components, each a subproject of the Apache Hadoop project. Two of the main ones are the Hadoop Distributed File System (HDFS) and the MapReduce framework.

The idea is that you can network together a number of off-the-shelf computers to create a cluster. HDFS runs across the cluster. As you add data to the cluster, it is split into large chunks/blocks (generally 64MB) and distributed around the cluster. HDFS replicates data so it can recover from hardware failures; in fact it expects hardware failures, since it is meant to work with standard hardware. HDFS is based on the Google paper about their distributed file system, GFS.
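
To make that concrete, here is a minimal sketch of writing a file through the HDFS Java API. The NameNode address and the file path are placeholder assumptions for illustration; the point is that the client just writes to an ordinary output stream while HDFS handles the block splitting and replication behind the scenes.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder NameNode address -- substitute your cluster's.
            conf.set("fs.defaultFS", "hdfs://namenode:8020");

            FileSystem fs = FileSystem.get(conf);

            // The client sees one logical file; HDFS transparently splits
            // it into blocks and replicates each block across DataNodes.
            Path path = new Path("/user/demo/hello.txt");
            try (FSDataOutputStream out = fs.create(path)) {
                out.writeUTF("Hello, HDFS!");
            }
        }
    }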

The Hadoop MapReduce framework runs over the data stored in HDFS. MapReduce 'jobs' provide key/value-based processing in a highly parallel fashion. Because the data is distributed over the cluster, a MapReduce job can be split up into many parallel processes running over the data stored on the cluster. The Map parts of MapReduce run only on the data they can see, i.e. the data blocks on the particular machine each one is running on. The Reduce brings together the output from the Maps.
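
To see what a job looks like, here is a sketch of the classic word count (essentially the example from the Hadoop tutorial linked below): each Mapper emits a (word, 1) pair for every word in the blocks local to its machine, and the Reducer sums the counts that arrive for each word.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map: runs on the machines holding the data blocks and
        // emits (word, 1) for every word it sees.
        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {
            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reduce: receives all the counts for a given word, regardless
        // of which machine mapped them, and sums them into one total.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

You would package this into a jar and submit it with something like hadoop jar wordcount.jar WordCount <input> <output>; the framework splits the input, schedules map tasks near the data, and shuffles the map output to the reducers.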

The result is a system that provides a highly parallel batch-processing capability. The system scales well: you just add more hardware to increase its storage capacity or to decrease the time a MapReduce job takes to run.

Some links:

  • Word Count introduction to Hadoop MapReduce
  • The Google File System
  • MapReduce: Simplified Data Processing on Large Clusters
answered Oct 12 '22 by Binary Nerd