
How does the HDFS client know the block size while writing?

The HDFS client is outside the HDFS cluster. When the HDFS client writes a file to Hadoop, it splits the file into blocks and then writes each block to a DataNode.

The question is: how does the HDFS client know the block size? The block size is configured on the NameNode, and the HDFS client seems to have no idea about it, so how can it split the file into blocks?

asked Oct 18 '22 by Surendiran Balasubramanian


1 Answer

HDFS is designed so that the block size for a particular file is part of that file's metadata.
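You can see this per-file value for yourself with a stat call against an existing file (the path below is just a placeholder):

    # %o prints the block size recorded in the file's metadata on the NameNode
    hdfs dfs -stat "%o" /user/test/example.txt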

Let's check what this means.

The client can tell the NameNode that it will put data into HDFS with a particular block size. The client has its own hdfs-site.xml that can contain this value, and it can also specify it on a per-request basis using the -Ddfs.blocksize parameter.
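A minimal sketch of what that client-side configuration could look like (the 64 MB value is only an example, not a recommendation):

    <!-- client-side hdfs-site.xml: block size this client will request when writing files -->
    <property>
      <name>dfs.blocksize</name>
      <value>67108864</value> <!-- 64 MB; size suffixes such as 64m are also accepted -->
    </property>

And a per-request override from the command line (the local file and destination path are placeholders):

    # Ask for a 256 MB block size for this one upload, overriding the client's default
    hdfs dfs -D dfs.blocksize=268435456 -put localfile.txt /user/test/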

If the client configuration does not define this parameter, it defaults to the org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SIZE_DEFAULT value, which is 128 MB.
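A rough sketch of that lookup on the client side (this is an illustration of the key and default named above, not the actual DFSClient code):

    // Illustration only: resolve the block size the client will request,
    // falling back to DFSConfigKeys.DFS_BLOCK_SIZE_DEFAULT (128 MB) when
    // dfs.blocksize is not set in the client's configuration.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;

    public class BlockSizeLookup {
        public static void main(String[] args) {
            Configuration conf = new Configuration(); // loads the client's hdfs-site.xml, if present
            long blockSize = conf.getLongBytes(
                    DFSConfigKeys.DFS_BLOCK_SIZE_KEY,      // "dfs.blocksize"
                    DFSConfigKeys.DFS_BLOCK_SIZE_DEFAULT); // 128 * 1024 * 1024
            System.out.println("Block size the client will ask for: " + blockSize + " bytes");
        }
    }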

The NameNode can throw an error for the client if it specifies a block size that is smaller than dfs.namenode.fs-limits.min-block-size (1 MB by default).
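For example, asking for a 512 KB block size should be rejected by a NameNode running with the default limit (the exact error message can vary between Hadoop versions):

    # 524288 bytes = 512 KB, below the default dfs.namenode.fs-limits.min-block-size of 1 MB,
    # so the NameNode refuses the create request
    hdfs dfs -D dfs.blocksize=524288 -put localfile.txt /user/test/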

There is nothing magical in this: the NameNode knows nothing about the data and lets the client decide the optimal splitting, as well as define the replication factor for the blocks of a file.
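A minimal sketch of a client choosing both values programmatically through the standard FileSystem API (the destination path and the numbers are just examples):

    // Illustration: the client passes its chosen block size and replication
    // factor to the NameNode when it creates the file.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ClientChosenBlockSize {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            long blockSize = 64L * 1024 * 1024; // 64 MB, decided by the client
            short replication = 2;              // replication factor, also decided by the client
            int bufferSize = 4096;

            try (FSDataOutputStream out = fs.create(
                    new Path("/user/test/example.txt"), // hypothetical destination path
                    true,        // overwrite if it already exists
                    bufferSize,
                    replication,
                    blockSize)) {
                out.writeUTF("hello HDFS");
            }
        }
    }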

answered Oct 21 '22 by pifta