Hadoop

Hadoop is the most widely used open source implementation of the MapReduce programming model. Apache Hadoop is a scalable and reliable software framework for parallel and distributed computing. Instead of depending on expensive resources to store and process huge volumes of data, Hadoop enables parallel processing of big data on inexpensive commodity hardware. The following diagram represents the components of the Hadoop architecture:

Apache Hadoop runs five daemons, each in its own JVM, namely:

  1. NameNode.
  2. DataNode.
  3. Secondary NameNode.
  4. JobTracker.
  5. TaskTracker.

The NameNode and DataNodes belong to the HDFS layer; the NameNode stores the metadata, while the DataNodes store the actual data blocks. The JobTracker and TaskTrackers belong to the MapReduce layer and schedule, track, and execute jobs. A Hadoop cluster contains a master node and numerous slave nodes.
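
As a minimal sketch of how a client talks to the HDFS layer, the Java snippet below lists the root directory through Hadoop's FileSystem API; the NameNode answers this metadata request, while reads and writes of file contents would go to the DataNodes. It assumes the client's configuration (core-site.xml on the classpath) already points fs.defaultFS at the cluster; the class name and path are illustrative, not taken from the text:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListHdfsRoot {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath;
        // fs.defaultFS is assumed to point at the cluster's NameNode.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // The NameNode serves this listing; no file data is read here.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
        fs.close();
    }
}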

The Hadoop core architecture contains:

  1. Hadoop Distributed File System (HDFS).
  2. MapReduce.
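
To show how the MapReduce component is programmed, the sketch below is the classic word count job written against the org.apache.hadoop.mapreduce API: the Mapper emits (word, 1) pairs, the Reducer sums the counts per word, and the driver submits the Job to the cluster. The class names and the input/output paths passed as arguments are illustrative:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: split each input line into words and emit (word, 1).
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: sum all the counts emitted for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    // Driver: configure and submit the job; args[0] is the HDFS input
    // path and args[1] the output path (illustrative).
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged as a JAR and submitted to the cluster, the job is scheduled by the JobTracker, which assigns the map and reduce tasks to the TaskTrackers running on the slave nodes.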