How can I debug why Fluentd is not sending data to Elasticsearch?

There are no error messages when bringing up the Fluentd Docker container, which makes this hard to debug.

Running curl http://elasticsearch:9200/_cat/indices from the Fluentd container shows indices, but the fluentd index is not among them.

docker logs 7b
2018-06-29 13:56:41 +0000 [info]: reading config file path="/fluentd/etc/fluent.conf"
2018-06-29 13:56:41 +0000 [info]: starting fluentd-0.12.19
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.4.0'
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-rename-key' version '0.1.3'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.12.19'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.10.61'
2018-06-29 13:56:41 +0000 [info]: adding filter pattern="**" type="record_transformer"
2018-06-29 13:56:41 +0000 [info]: adding match pattern="docker.*" type="rename_key"
2018-06-29 13:56:41 +0000 [info]: Added rename key rule: rename_rule1 {:key_regexp=>/^log$/, :new_key=>"message"}
2018-06-29 13:56:41 +0000 [info]: adding match pattern="**" type="elasticsearch"
2018-06-29 13:56:41 +0000 [info]: adding source type="forward"
2018-06-29 13:56:41 +0000 [info]: adding source type="monitor_agent"
2018-06-29 13:56:41 +0000 [info]: using configuration file: <ROOT>
  <source>
    @type forward
  </source>
  <source>
    @type monitor_agent
    bind 0.0.0.0
    port 24220
  </source>
  <filter **>
    type record_transformer
    <record>
      node /
      role app
      environment dev
      tenant xxx
      tag ${tag}
    </record>
  </filter>
  <match docker.*>
    type rename_key
    rename_rule1 ^log$ message
    append_tag message
  </match>
  <match **>
    type elasticsearch
    host elasticsearch
    port 9200
    index_name fluentd
    type_name fluentd
    include_tag_key true
    logstash_format true
  </match>
</ROOT>
2018-06-29 13:56:41 +0000 [info]: listening fluent socket on 0.0.0.0:24224
...
2018-06-29 14:16:38 +0000 [info]: listening fluent socket on 0.0.0.0:24224
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=49
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=50
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=51
... many repeats
2018-07-01 06:21:52 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 08:39:07 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
  2018-07-01 06:21:52 +0000 [warn]: suppressed same stacktrace
2018-07-01 08:39:07 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 13:02:17 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
  2018-07-01 08:39:07 +0000 [warn]: suppressed same stacktrace
2018-07-01 13:02:17 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 21:04:48 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
  2018-07-01 13:02:17 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [warn]: failed to flush the buffer. error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 21:04:48 +0000 [warn]: retry count exceededs limit.
  2018-07-01 21:04:48 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [error]: throwing away old logs.

I am able to successfully insert data into a test index in Elasticsearch with curl. How do I troubleshoot where Fluentd fails?
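
One place to look: the monitor_agent source already configured above exposes plugin metrics over HTTP, and querying it shows each output plugin's retry count and buffer state. A quick sketch (the container name is a placeholder):

# Query Fluentd's monitor_agent (bound to port 24220 in the config above);
# the JSON output includes retry_count and buffer_queue_length per plugin.
docker exec -it <fluentd-container> curl -s http://localhost:24220/api/plugins.json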

Dennis asked Jul 02 '18


2 Answers

I am unable to comment, so I'm adding a couple of observations here.

The documentation says to use @type elasticsearch (the config above uses the bare type parameter). Also, if both Elasticsearch and Fluentd are running as Docker containers, make sure they run on the same network so they can talk to each other (maybe try IP addresses first).
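
For example, a quick connectivity check along these lines (the container and network names are placeholders, adjust to your setup):

# Confirm both containers are attached to the same user-defined network
docker network inspect logging-net

# Verify Fluentd can resolve and reach Elasticsearch by service name
docker exec -it fluentd-container curl -s http://elasticsearch:9200/_cluster/health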

Also, what does your Dockerfile look like? Then we can pass a verbosity flag to the fluentd command.
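
For reference, fluentd accepts -v/-vv flags for debug- and trace-level logging. With the official fluent/fluentd image this can, as far as I know, be passed via the FLUENTD_OPT environment variable; otherwise add the flag to the command in your Dockerfile:

# Option 1: pass verbosity through the official image's FLUENTD_OPT hook
docker run -d -p 24224:24224 -e FLUENTD_OPT="-vv" fluent/fluentd

# Option 2: run fluentd directly with trace-level logging
fluentd -c /fluentd/etc/fluent.conf -vv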

Imran answered Oct 02 '22


I successfully used this configuration for Fluentd + Elasticsearch:

<source>
  @type      forward
  @label     @mainstream
  bind       0.0.0.0
  port       24224
</source>

<label @mainstream>
  <match **>
    @type copy

    <store>
      @type               elasticsearch
      host                elasticsearch
      port                9200
      logstash_format     true
      logstash_prefix     fluentd
      logstash_dateformat %Y%m%d
      include_tag_key     true
      type_name           access_log
      tag_key             @log_name
      <buffer>
        flush_mode            interval
        flush_interval        1s
        retry_type            exponential_backoff
        flush_thread_count    2
        retry_forever         true
        retry_max_interval    30
        chunk_limit_size      2M
        queue_limit_length    8
        overflow_action       block
      </buffer>
    </store>

  </match>
</label>
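
Note that the <buffer> section is Fluentd v1 syntax; the log in the question shows fluentd 0.12, which does not support it, so this config implies upgrading Fluentd and fluent-plugin-elasticsearch. A minimal sketch of running it, assuming the config is saved as fluent.conf and you have an image with fluent-plugin-elasticsearch installed:

# Mount the config and expose the forward port (image name is a placeholder;
# the stock fluent/fluentd image does not bundle fluent-plugin-elasticsearch)
docker run -d --name fluentd \
  -p 24224:24224 \
  -v $(pwd)/fluent.conf:/fluentd/etc/fluent.conf \
  my-fluentd-es-image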

For debugging you could use tcpdump:

sudo tcpdump -i eth0 tcp port 24224 -X -s 0 -nn

Note: I removed the leading slash from the first source tag.

Nicola Ben answered Oct 02 '22