I know that HDFS stores data using the regular Linux file system on the data nodes. My HDFS block size is 128 MB. Let's say I have 10 GB of disk space in my Hadoop cluster; that means HDFS initially has 80 blocks of available storage.

If I create a small file of, say, 12.8 MB, the number of available HDFS blocks becomes 79. What happens if I create another small file of 12.8 MB? Will the number of available blocks stay at 79, or will it come down to 78? In the former case, HDFS would recalculate the number of available blocks after each allocation based on the remaining free disk space, so the count would only drop to 78 once more than 128 MB of disk space has actually been consumed. Please clarify.
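One way to check this kind of accounting empirically is to compare the NameNode's capacity counters before and after uploading a small file. A minimal sketch, assuming shell access to a running cluster (the local path and HDFS path are just example names):

# Capacity, "DFS Used" and "DFS Remaining" before the test
hdfs dfsadmin -report

# Create a file well under the 128 MB block size and upload it
dd if=/dev/zero of=/tmp/small_file bs=1M count=12
hadoop fs -copyFromLocal /tmp/small_file /small_file

# Report again: "DFS Used" should grow by roughly the file size times the
# replication factor, not by a full 128 MB block per replica
hdfs dfsadmin -report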
HDFS blocks are large compared to disk blocks in order to minimize the cost of seeks. If the blocks were much smaller, a file would be split into many more pieces and a larger share of the total time would be spent seeking (locating the start of each block) instead of transferring data.

HDFS blocks are much larger than disk blocks for the same reason: by making the block size sufficiently large, the time to transfer a block's data from disk becomes significantly longer than the time to seek to the start of the block, so reading a large file runs close to the disk transfer rate.
The default block size in HDFS is 128 MB (Hadoop 2.x) and 64 MB (Hadoop 1.x), which is much larger than on a Linux file system, where the block size is typically 4 KB. The reason for this large block size is to minimize seek cost and to reduce the amount of metadata kept per block on the NameNode.

Files smaller than the block size do not occupy a full block on disk. The HDFS block size is kept large in order to reduce seek cost and network traffic.
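You can verify the block size used for a particular file, and the real length of its blocks, directly from the command line. A small sketch, assuming a file already exists at /test/some_file (hypothetical path):

# Print the HDFS block size recorded for the file, in bytes
hadoop fs -stat "%o" /test/some_file

# Show the file's blocks; a small file maps to a single block whose length
# is the file size, not a full 128 MB
hdfs fsck /test/some_file -files -blocks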
The best way to know is to try it; see my results below.
But before trying, my guess is that even if you can only allocate 80 full blocks in your configuration, you can allocate more than 80 non-empty files. This is because I think HDFS does not consume a full block each time you create a non-empty file. Put another way, HDFS blocks are not a storage allocation unit but a replication unit. I think the storage allocation unit of HDFS is that of the underlying file system (if you use ext4 with a 4 KB block size and you create a 1 KB file in a cluster with a replication factor of 3, you consume 3 × 4 KB = 12 KB of hard disk space).
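This guess can be checked by comparing a file's logical size with the change in the cluster's used space. A sketch under the same assumptions as above (1K_file is just an example name, and the exact -du output format depends on the Hadoop version):

# Note "DFS Used" before the upload
hdfs dfsadmin -report | grep "DFS Used"

# Upload a 1 KB file
hadoop fs -copyFromLocal ./1K_file /test/1K_file

# Logical size as seen by HDFS
hadoop fs -du /test/1K_file

# "DFS Used" afterwards: with a replication factor of 3, the increase should
# be on the order of a few local file-system blocks, not 3 x 64 MB
hdfs dfsadmin -report | grep "DFS Used"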
Enough guessing and thinking, let's try it. My lab configuration is as follows:
After starting HDFS, I have the following NameNode summary:
Then I do the following commands:
hadoop fs -mkdir /test
for f in $(seq 1 10); do hadoop fs -copyFromLocal ./1K_file /test/$f; done
With these results:
So the 10 files did not consume 10 times 64 MB (no modification of "DFS Remaining").
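To see where the space actually went, one can also ask the NameNode for the block layout of the test files; a sketch with the same paths as above (output details vary by version):

# List the blocks behind the test files; each 1 KB file should map to one
# block whose reported length is about 1 KB, not a full 64 MB
hdfs fsck /test -files -blocks

# Per-file sizes as HDFS sees them
hadoop fs -du /test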