I'm using Fluentd to transfer the data into Elasticsearch.
td-agent.conf
## ElasticSearch
<match es.**>
  type elasticsearch
  target_index_key @target_index
  logstash_format true
  flush_interval 5s
</match>
Elasticsearch index mapping (truncated):
"logstash-2016.02.24" : {
  "aliases" : { },
  "mappings" : {
    "fluentd" : {
      "dynamic" : "strict",
      "properties" : {
        "@timestamp" : {
          "type" : "date",
          "format" : "strict_date_optional_time||epoch_millis"
        },
        "dummy" : {
          "type" : "string"
        }
      }
    }
  },
Transmitting JSON data:
$ curl -X POST -d 'json={"@target_index": "logstash-2016.02.24","dummy":"test"}' http://localhost:8888/es.test
It should write the data to the index given in the record (logstash-2016.02.24), but instead it creates a new index, logstash-2016.02.25, and writes the data there. I want the data written to the index named in the record.
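The date-stamped name comes from logstash_format: the plugin builds `logstash-YYYY.MM.DD` from the event's timestamp, which is why a record posted on the 25th lands in logstash-2016.02.25 regardless of any field in the record. A rough sketch of that naming rule (the function name is mine, not the plugin's):

```python
# Sketch of how logstash_format derives the index name. Assumption:
# the plugin formats the *event* time, not a field inside the record.
from datetime import datetime, timezone

def logstash_index(event_time, prefix="logstash"):
    """Build the date-stamped index name the way logstash_format does."""
    return "%s-%s" % (prefix, event_time.strftime("%Y.%m.%d"))

# An event received on 2016-02-25 goes to logstash-2016.02.25,
# no matter what @target_index in the record says.
print(logstash_index(datetime(2016, 2, 25, tzinfo=timezone.utc)))
# → logstash-2016.02.25
```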
Here is the Fluentd Elasticsearch plugin's GitHub link: https://github.com/uken/fluent-plugin-elasticsearch
Please correct me if I'm missing something.
Try this. The behaviour is caused by logstash_format true: remove that option and put your index name in the index_name field below (its default value is fluentd).
<match es.**>
  @type elasticsearch
  host localhost
  port 9200
  index_name <.....your_index_name_here.....>
  type_name fluentd
  flush_interval 5s
</match>
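If you still want the index name taken from each record, as in the question, the plugin's target_index_key option is intended for exactly that. A sketch, under the assumption (per the plugin README) that the value of the record's @target_index field is used as the index name and index_name is only the fallback when that field is absent:

<match es.**>
  @type elasticsearch
  host localhost
  port 9200
  # use the value of the record's @target_index field as the index name
  target_index_key @target_index
  # fallback index when a record carries no @target_index field
  index_name fluentd
  type_name fluentd
  flush_interval 5s
</match>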
After running this, check whether the index was created by loading the URL below in your browser (this requires the elasticsearch-head plugin to be installed):
http://localhost:9200/_plugin/head/
Alternatively, list all indices with http://localhost:9200/_cat/indices?v
Good luck!