From my understanding, rows inserted into an HBase table are stored as regions on different region servers. So, the region servers store the data.
Similarly, in Hadoop terms, data is stored in the data nodes of the Hadoop cluster.
Let's say I have HBase 0.90.6 configured on top of Hadoop 1.1.1 as follows:
2 nodes - master and slave
If, as stated above, the table data is stored in the region servers, then what are the respective roles of the data nodes and the region servers?
Region server: the store consists of a memstore and HFiles. The memstore works like a write cache: anything written to HBase is stored here first. Later, the data is flushed and saved in HFiles as blocks, and the memstore is cleared.
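To make the write path concrete, here is a minimal sketch using the 0.90-era Java client API; the table name 'mytable', the column family 'cf', and the values are placeholders for this example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PutExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "mytable"); // 'mytable' is a placeholder
            Put put = new Put(Bytes.toBytes("row1"));
            // In the 0.90-era API a cell is added with Put.add(family, qualifier, value)
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"));
            // The write lands in the RegionServer's memstore (and commit log),
            // not directly in an HFile on HDFS
            table.put(put);
            table.close();
        }
    }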
A region is the data in some range of rows. Say you want to get a row from an HBase table. Your request goes to the RegionServer responsible for the region containing your row. The RegionServer will either already have your row in memory (caching), or it will need to read it from HDFS (the DataNodes).
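For example, a read with the same 0.90-era client API looks like this; the client library transparently locates the RegionServer whose region covers the requested row key (the names are again placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GetExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "mytable");
            // The client routes this Get to the RegionServer responsible
            // for the region containing "row1"
            Get get = new Get(Bytes.toBytes("row1"));
            Result result = table.get(get);
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col"));
            System.out.println(Bytes.toString(value));
            table.close();
        }
    }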
DataNodes are the slave nodes in HDFS. The actual data is stored on the DataNodes. A functional filesystem has more than one DataNode, with data replicated across them. On startup, a DataNode connects to the NameNode and spins until that service comes up.
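You can see this replication directly with the HDFS fsck tool; assuming HBase keeps its files under the default /hbase directory in HDFS, this lists each file's blocks and the DataNodes holding them:

    hadoop fsck /hbase -files -blocks -locations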
ZooKeeper acts as the coordination service for the HBase architecture. It is responsible for keeping track of all the RegionServers and the regions within them. Monitoring which RegionServers and which HMaster are active and which have failed is also part of ZooKeeper's duties.
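This is also why every HBase client and server must be pointed at the ZooKeeper ensemble; a minimal hbase-site.xml entry, assuming ZooKeeper runs on a host named master, would be:

    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>master</value>
    </property>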
Data nodes store data. Region servers essentially buffer I/O operations; the data is stored permanently on HDFS (that is, on the data nodes). I do not think that putting a region server on your 'master' node is a good idea.
Here is a simplified picture of how regions are managed:
You have a cluster running HDFS (NameNode + DataNodes) with a replication factor of 3 (each HDFS block is copied to 3 different DataNodes).
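The replication factor is plain HDFS configuration, set in hdfs-site.xml:

    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>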
You run RegionServers on the same servers as DataNodes. When a write request comes to a RegionServer, it first writes the change into memory and into the commit log; then at some point it decides it is time to write the changes to permanent storage on HDFS. Here is where data locality comes into play: since the RegionServer and a DataNode run on the same server, the first HDFS block replica of the file will be written to that same server. The two other replicas will be written to, well, other DataNodes. As a result, the RegionServer serving the region will almost always have access to a local copy of the data.
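When "at some point" is reached is configurable; for instance, hbase.hregion.memstore.flush.size sets the memstore size, in bytes, that triggers a flush to a new HFile (64 MB shown here; check your version's default):

    <property>
      <name>hbase.hregion.memstore.flush.size</name>
      <value>67108864</value>
    </property>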
What if a RegionServer crashes, or the HMaster decides to reassign a region to another RegionServer (to keep the cluster balanced)? The new RegionServer will be forced to perform remote reads at first, but as soon as a compaction is performed (merging the change log into the data), the new file will be written to HDFS by the new RegionServer, and a local copy will appear on its server (again, because the DataNode and the RegionServer run on the same machine).
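You can also trigger a compaction by hand from the HBase shell (the table name is a placeholder):

    $ hbase shell
    hbase> major_compact 'mytable'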
Note: in case of a RegionServer crash, the regions previously assigned to it will be reassigned to multiple RegionServers.
Good reads:
Tom White, "Hadoop: The Definitive Guide", has a good explanation of the HDFS architecture. Unfortunately, I have not read the original Google GFS paper, so I cannot tell whether it is easy to follow.
The Google Bigtable paper. HBase is an implementation of Google's Bigtable, and I found the architecture description in that paper the easiest to follow.
Here are the nomenclature differences between Google's Bigtable and the HBase implementation (from Lars George, "HBase: The Definitive Guide"):

    Bigtable          HBase
    tablet            region
    tablet server     region server
    memtable          memstore
    SSTable           HFile
    commit log        write-ahead log (WAL)
    GFS               HDFS
    Chubby            ZooKeeper