My NameNode in the Hadoop cluster turned to bad health because "The role's log directory is on a filesystem with less than 4GB of its space free. /var/log/hadoop-hdfs (free: 2.4 GB (11.12%), capacity: 21.7 GB)".
I looked into that folder and found 5.5GB of log files named "hdfs-audit.log.0" to "hdfs-audit.log.20". I read these files and I really don't need to keep them. Is there a way to permanently delete them and never generate them again? (I tried to delete them manually, but they came back after a few hours.)
I also tried adding
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=WARN,NullAppender
to my /etc/hadoop/conf/log4j.properties, but it did not prevent those files from "coming back".
Thank you for your help in advance!
The default directory for Hadoop log files is $HADOOP_HOME/logs (i.e. the logs directory inside the Hadoop home directory).
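If the partition holding the logs is simply too small, another option is to move the log directory to a filesystem with more room. A minimal sketch, assuming a standard hadoop-env.sh (both the file location and the target directory below are placeholders for your setup):

# In hadoop-env.sh (e.g. /etc/hadoop/conf/hadoop-env.sh or $HADOOP_HOME/etc/hadoop/hadoop-env.sh)
# HADOOP_LOG_DIR overrides the default $HADOOP_HOME/logs location
export HADOOP_LOG_DIR=/data/hadoop/logs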
Log in to the Ambari console and navigate to the HDFS > Configs > Advanced section. Expand the Advanced hdfs-log4j section and scroll to the hdfs audit logging section.
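That section contains the logger and appender definitions that produce hdfs-audit.log. A minimal sketch of the kind of change to make there, assuming the stock template that drives the audit logger through the hdfs.audit.logger variable (if your template differs, adjust the corresponding lines): point the audit logger at a NullAppender so nothing is written.

# Discard HDFS audit events instead of writing them to hdfs-audit.log
hdfs.audit.logger=WARN,NullAppender
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false

Save the configuration and restart HDFS from Ambari so the NameNode picks up the new logging settings.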
First of all, Hadoop is designed for much larger capacities than the ones you mention, so 5.5GB of logs is usually not much. This explains why the default settings are not appropriate in your case.
You can lower either of these settings so the audit logs take up less space:
navigator.audit_log_max_backup_index (the number of rolled audit log files to keep, usually 10)
navigator.audit_log_max_file_size (the maximum size of each audit log file before it is rolled)
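For reference, these settings correspond to an ordinary log4j RollingFileAppender. A minimal sketch of what the stock Hadoop log4j.properties does for the audit log (the appender name RFAAUDIT and the values shown come from the default template and are only illustrative; if your configuration is generated by a management tool, change the settings there instead of editing the file by hand):

# Rolling audit log appender: roll at MaxFileSize, keep MaxBackupIndex old files
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=/var/log/hadoop-hdfs/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %m%n
# Lower these two values to cap the disk space used by hdfs-audit.log.*
log4j.appender.RFAAUDIT.MaxFileSize=64MB
log4j.appender.RFAAUDIT.MaxBackupIndex=2
# Send audit events only to this appender (not to the main NameNode log)
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=INFO,RFAAUDIT
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false

With MaxFileSize=64MB and MaxBackupIndex=2, the audit logs are capped at roughly 3 x 64MB of disk space.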