
Deleting files from HDFS does not free up disk space

After upgrading our small Cloudera Hadoop cluster to CDH 5, deleting files no longer frees up available storage space. Even though we delete more data than we add, the file system keeps filling up.

Cluster setup

We are running a four-node cluster on physical, dedicated hardware, with some 110 TB of total storage capacity. On April 3, we upgraded the CDH software from version 5.0.0-beta2 to version 5.0.0-1.

We previously put log data on HDFS in plain-text format, at a rate of approximately 700 GB/day. On April 1 we switched to importing the data as .gz files instead, which lowered the daily ingestion rate to about 130 GB.

Since we only want to retain data up to a certain age, there is a nightly job that deletes obsolete files. The effect of this job used to be clearly visible in the HDFS capacity monitoring chart, but it can no longer be seen.
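Such a job is essentially a recursive delete of directories older than the retention window; the path layout and 30-day window in this sketch are placeholders, not our actual values:

# hypothetical retention job: remove the log directory that just fell out of the retention window
hdfs dfs -rm -r /user/logs/$(date -d '30 days ago' +%Y/%m/%d)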

Since we import about 570 GB less data than we delete every day, one would expect the capacity used to go down. Instead, our reported HDFS usage has been growing steadily since the cluster software was upgraded.

Problem description

Running hdfs dfs -du -h / gives the following output:

0       /system
1.3 T   /tmp
24.3 T  /user

This is consistent with what we expect to see, given the size of the imported files. Using a replication factor of 3, this should correspond to a physical disk usage of about 76.8 TB.
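For reference, the replication factor assumed here can be confirmed from the client configuration (we expect it to print 3):

# default replication factor configured for the cluster
hdfs getconf -confKey dfs.replication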

When instead running hdfs dfsadmin -report, the result is different:

Configured Capacity: 125179101388800 (113.85 TB)
Present Capacity: 119134820995005 (108.35 TB)
DFS Remaining: 10020134191104 (9.11 TB)
DFS Used: 109114686803901 (99.24 TB)
DFS Used%: 91.59%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

Here, DFS Used is reported as 99.24 TB, which is what we see in the monitoring chart. Where did all that data come from?

What we have tried

The first thing we suspected was that the automatic emptying of trash was not working, but that does not seem to be the case. Only the most recently deleted files are in trash, and they automatically disappear after a day.
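For reference, the trash contents can be checked like this (the /user/*/.Trash layout is the default per-user trash location):

# how much data is sitting in each user's trash
hdfs dfs -du -h /user/*/.Trash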

Our issue seems very similar to what would happen if an HDFS metadata upgrade was performed but not finalized. I don't think this should be needed when upgrading between these versions, but I have still performed both steps 'just in case'.

On the DN storage volumes in the local file system, there is a lot of data under previous/finalized. I know too little about the implementation details of HDFS to tell whether this is significant, but it could indicate that something in the finalization is out of sync.
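As an illustration, this is roughly how the size of those directories can be checked on a datanode; the /data/* mount points are placeholders for whatever dfs.datanode.data.dir points to:

# local disk usage of the 'previous' directory for each block pool
du -sh /data/*/dfs/dn/current/BP-*/previous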

We will soon run out of disk space on the cluster, so any help is much appreciated.

asked Apr 14 '14 by knutn

People also ask

How do I clear HDFS disk usage?

From the Ambari Dashboard, click HDFS -> Configs -> Advanced -> Advanced core-site, then set 'fs.trash.interval' to 0 to disable trash.
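Note that fs.trash.interval is given in minutes and 0 disables trash entirely; the currently effective value can be checked from the command line:

# print the effective trash interval in minutes
hdfs getconf -confKey fs.trash.interval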

What happens if we delete a file from HDFS?

Any file stored in HDFS is split into blocks (chunks of data), and each block is replicated 3 times by default. When you delete a file, you remove the metadata stored in the NameNode that points to its blocks. The blocks themselves are deleted once there is no longer any reference to them in the NameNode metadata.
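To see this block and replica view for a concrete file, fsck can be used (the path below is a placeholder):

# list the blocks, replica counts and locations backing a file
hdfs fsck /path/to/file -files -blocks -locations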

How do I clean up my HDFS files?

The rm command is part of the Hadoop fs command set. It is similar to the Linux rm command and is used for removing a file from the HDFS file system. The -rmr option (in current releases, -rm -r) can be used to delete files recursively.
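A couple of illustrative invocations (the paths are placeholders):

# delete a single file
hdfs dfs -rm /path/to/file
# delete a directory tree recursively
hdfs dfs -rm -r /path/to/directory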


1 Answer

I ran into a similar issue on our cluster, which probably stemmed from a failed upgrade.

First, make sure to finalize the upgrade on the namenode:

hdfs dfsadmin -finalizeUpgrade

What I found was that the datanodes for some reason did not finalize their directories at all.

On your datanode, you should see the following directory layout

/{mountpoint}/dfs/dn/current/{blockpool}/current

And

/{mountpoint}/dfs/dn/current/{blockpool}/previous

If you have not finalized the upgrade, the previous directory contains all data that existed before the update. Anything you delete afterwards is not actually removed from disk, hence your storage usage never goes down.
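A quick way to check whether a datanode is still carrying unfinalized data (the /data/* mount points are placeholders for your dfs.datanode.data.dir entries):

# any output here means a 'previous' directory still exists for that block pool
ls -d /data/*/dfs/dn/current/BP-*/previous 2>/dev/null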

Actually, the simplest solution was sufficient:

Restart the namenode
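On a plain CDH package install that could be something like the following; if the cluster is managed by Cloudera Manager, restart the NameNode role from there instead:

# restart the namenode service (package-based install)
sudo service hadoop-hdfs-namenode restart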

Watch the datanode logs; you should see something like this:

INFO org.apache.hadoop.hdfs.server.common.Storage: Finalizing upgrade for storage directory

Afterwards the directories will be cleared in the background and the storage reclaimed.
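You can follow the reclaimed space with the same report used in the question; DFS Used should start dropping once the previous directories are removed:

# watch the used/remaining numbers shrink back to the expected level
hdfs dfsadmin -report | grep 'DFS'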

answered Oct 03 '22 by Joey