
ElasticSearch: How to configure logging.yml

I've installed ElasticSearch 1.4.1 on Ubuntu machine.

Its logs are in the default location:

/var/log/elasticsearch/

When ElasticSearch runs for a while, the log files grow and become difficult to handle.

The logs already rotate daily: every day a new log file is created.

I want the rotated logs to be zipped (e.g. file.log.zip); currently they're not.

I know this should be configurable in the logging.yml file, but I don't know how.

Can someone help me with that?

asked Feb 24 '15 by raven99



1 Answer

After some digging (also in the ElasticSearch source code) I've found an answer. :)

Usually, when you run software like ElasticSearch in production, you assume that a basic function like logging is fully taken care of. Sadly, in the current version (1.4 and prior) this is not the case.

ElasticSearch uses log4j as its logging mechanism.

Out of the box, log4j does the following:

  1. Appends log entries to a log file
  2. Rotates the log file when it grows larger than CONFIGURED_VALUE

Under heavy use, the logs pile up and eventually fill your entire storage.

What is missing is the following:

  1. Zip the old rotated log files
  2. Make sure the total size of the log files never exceeds CONFIGURED_VALUE

There is a companion library called log4j-extensions that extends log4j's capabilities and can zip the rotated log files.

Sadly, support for it will only arrive in the next version (1.5), or on the master branch for those brave enough to compile ElasticSearch from source. (See log4j rollingPolicy support.)
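For reference, plain log4j (without the extension) can at least cap disk usage with a size-based rolling appender, though it cannot compress. A sketch of what that might look like in logging.yml; the camelCase keys are assumptions based on how ElasticSearch 1.x maps YAML keys onto log4j appender properties, so verify against your version before using:

```yaml
# Sketch only: size-capped rotation with plain log4j (no compression).
# maxFileSize / maxBackupIndex are assumed mappings of log4j's
# RollingFileAppender MaxFileSize / MaxBackupIndex properties.
file:
  type: rollingFile
  file: ${path.logs}/${cluster.name}.log
  maxFileSize: 50mb
  maxBackupIndex: 10
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
```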

BUT, there is a simpler solution:

USE LOGROTATE TO HANDLE YOUR LOGS.

If ElasticSearch is running on Linux, you can use the logrotate daemon. (For an introduction, see Understanding logrotate utility.)

You need to do the following:

  1. Reset the log configuration
  2. Create a new file for handling log files

Reset the log configuration

sudo vi /etc/elasticsearch/logging.yml

Change the following in logging.yml

# Mark the dailyRollingFile Appender
#  file:
#    type: dailyRollingFile
#    file: ${path.logs}/${cluster.name}.log
#    datePattern: "'.'yyyy-MM-dd"
#    layout:
#      type: pattern
#      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

# Add the file Appender (datePattern is dropped; a plain file
# appender does not use it)
file:
  type: file
  file: ${path.logs}/${cluster.name}.log
  layout:
    type: pattern
    conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

Create a new file for handling log files

sudo vi /etc/logrotate.d/elasticsearch

Add the following to the logrotate file:

/var/log/elasticsearch/*.log {
    daily
    rotate 100
    size 50M
    copytruncate
    compress
    delaycompress
    missingok
    notifempty
    create 644 elasticsearch elasticsearch
}

After that, restart ElasticSearch:

sudo service elasticsearch stop
sudo service elasticsearch start

This way you limit the total log storage to 5 GB (100 rotated files * 50 MB each).

Obviously, you can configure it as you see fit.

answered Oct 10 '22 by raven99