I tried copying files from my local disk to HDFS. At first it gave a SafeModeException. While searching for a solution I read that the problem does not appear if you execute the same command again, so I tried again and it didn't give the exception.
hduser@saket:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg/ /user/hduser/gutenberg
copyFromLocal: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/hduser/gutenberg. Name node is in safe mode.
hduser@saket:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg/ /user/hduser/gutenberg
Why is this happening? Should I keep safe mode off by using this command?
hadoop dfsadmin -safemode leave
Safe mode for the NameNode is essentially a read-only mode for the Hadoop Distributed File System (HDFS) cluster. The NameNode may enter safe mode for several reasons, for example when the available space is less than the amount required for the NameNode storage directory.
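You can check and control safe mode with the dfsadmin tool. The hdfs form shown below assumes Hadoop 2.x or later; older installs like the one in the question use bin/hadoop dfsadmin instead:

hdfs dfsadmin -safemode get    # report whether safe mode is currently ON or OFF
hdfs dfsadmin -safemode wait   # block until the NameNode leaves safe mode on its own
hdfs dfsadmin -safemode leave  # force the NameNode out of safe mode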
Once you have identified the corrupted blocks, you can remove them with the hdfs dfs -rm command. However, a write operation such as a delete is not possible while the NameNode is in safe mode, so you need to leave safe mode first, roughly as sketched below.
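A rough cleanup sequence might look like this; the path in the last command is only a placeholder for whatever fsck actually reports:

hdfs fsck / -list-corruptfileblocks   # list files that have corrupt or missing blocks
hdfs dfsadmin -safemode leave         # leave safe mode so that deletes are allowed
hdfs dfs -rm /path/to/corrupt/file    # remove a corrupt file reported by fsck (placeholder path)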
When the NameNode goes down, the file system goes offline. There is an optional SecondaryNameNode that can be hosted on a separate machine. It only creates checkpoints of the namespace by merging the edits file into the fsimage file and does not provide any real redundancy.
The NameNode stays in safe mode until a configured percentage of blocks has been reported as available by the DataNodes. This threshold is controlled by the dfs.namenode.safemode.threshold-pct parameter in hdfs-site.xml.
For small / development clusters with very few blocks, it makes sense to set this parameter lower than its default value of 0.999f; otherwise a single missing block can leave the system stuck in safe mode.
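For illustration only, lowering the threshold on a small development cluster could look like this in hdfs-site.xml (0.95f is an arbitrary example value; on Hadoop 1.x the property is named dfs.safemode.threshold.pct instead):

<property>
  <!-- example: require only 95% of blocks to be reported before leaving safe mode -->
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>0.95f</value>
</property>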