I'm trying to set up an ELK stack on an EC2 Ubuntu 14.04 instance. Everything installed and appears to be working fine, except for one thing:
Logstash is not creating an index in Elasticsearch. Whenever I try to access Kibana, it asks me to choose an index from Elasticsearch.
Logstash is running on the ES node, but the index is missing. Here's the message I get:
"Unable to fetch mapping. Do you have indices matching the pattern?"
Am I missing something? I followed this tutorial: Digital Ocean
Logstash uses an input plugin to ingest data and an Elasticsearch output plugin to index the data in Elasticsearch, following the Logstash processing pipeline. A Logstash instance has a fixed pipeline, constructed at startup from the instance's configuration file (see Deploying and Scaling Logstash).
To create an index pattern: open the main menu, then go to Stack Management > Index Patterns. Click Create index pattern. Start typing in the Index pattern field; Kibana looks for the names of indices, data streams, and aliases that match your input.
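Before typing a pattern, it helps to check what Elasticsearch actually holds. A quick way to list indices, assuming Elasticsearch is reachable on the default localhost:9200:

```shell
# List all indices Elasticsearch currently holds (health, name, doc count).
# Assumes Elasticsearch is listening on the default localhost:9200.
curl -s 'http://localhost:9200/_cat/indices?v'
```

If no `logstash-*` index appears in the output, the problem is on the Logstash side, not in Kibana.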
I got identical results on Amazon AMI (a CentOS/RHEL clone).
In fact, exactly as above… until I injected some data into Elasticsearch - that creates the first daily
index - and then Kibana starts working. My simple .conf
is:
input {
  stdin {
    type => "syslog"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    port => 9200
    protocol => "http"
  }
}
then
cat /var/log/messages | logstash -f your.conf
Why stdin,
you ask? Well, it's not made clear anywhere (I'm also a new Logstash user and found this very confusing) that Logstash never terminates when using, for example, the file
plugin - it's designed to keep watching.
But with stdin, Logstash runs, sends the data to Elasticsearch (which creates the index), and then exits.
If I did the same thing above with the file
input plugin, it would never create the index - I don't know why that is.
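A likely explanation: by default the file input tails files from the end, picking up only new lines, and it records its position in a sincedb file between runs. A sketch of a config that re-reads an existing file from the start (the path and sincedb settings here are illustrative, not from the original answer):

```conf
input {
  file {
    path => "/var/log/messages"
    type => "syslog"
    start_position => "beginning"   # read existing content, not just newly appended lines
    sincedb_path => "/dev/null"     # forget the saved position between runs (testing only)
  }
}
```

With these two options set, the file input should index the file's existing contents on the first run, which creates the index just like the stdin trick does.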
I finally managed to identify the issue. For some reason, port 5000 was being used by another service, which prevented Logstash from accepting any incoming connections. So all you have to do is edit the logstash.conf file and change the port from 5000
to 5001,
or anything convenient.
Make sure all of your logstash-forwarders are sending logs to the new port, and you should be good to go. If you generated the logstash-forwarder.crt
using the FQDN method, then the logstash-forwarder should point to that same FQDN, not an IP.
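In the DigitalOcean-style setup, this means changing the port of the lumberjack input that logstash-forwarder ships to. A sketch, assuming the certificate paths used in that tutorial (adjust to your own):

```conf
input {
  lumberjack {
    port => 5001    # moved off 5000, which another service was occupying
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```

Remember to update the matching port in the "servers" list of each forwarder's /etc/logstash-forwarder.conf, then restart both Logstash and the forwarders.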