I have just installed Kibana 7.3 on RHEL 8. The Kibana service is active (running), but I receive a "Kibana server is not ready yet" message when I curl http://localhost:5601. My Elasticsearch instance is on another server, and it responds successfully to my requests. I have updated kibana.yml with this:
elasticsearch.hosts: ["http://EXTERNAL-IP-ADDRESS-OF-ES:9200"]
I can reach Elasticsearch from the internet and get this response:
{
  "name" : "ip-172-31-21-240.ec2.internal",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "y4UjlddiQimGRh29TVZoeA",
  "version" : {
    "number" : "7.3.1",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "4749ba6",
    "build_date" : "2019-08-19T20:19:25.651794Z",
    "build_snapshot" : false,
    "lucene_version" : "8.1.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
The result of sudo systemctl status kibana:

● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-09-19 12:22:34 UTC; 24min ago
 Main PID: 4912 (node)
    Tasks: 21 (limit: 4998)
   Memory: 368.8M
   CGroup: /system.slice/kibana.service
           └─4912 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size>

Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:42 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:43 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
Sep 19 12:46:44 ip-172-31-88-39.ec2.internal kibana[4912]: {"type":"log","@timestamp":"2019-0>
The result of sudo journalctl --unit kibana:

Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive >
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect>
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","task_manager"],"pid":1356,"message":"PollError No Living connec>
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"Unable to revive >
Sep 19 06:03:53 ip-172-31-88-39.ec2.internal kibana[1356]: {"type":"log","@timestamp":"2019-09-19T06:03:53Z","tags":["warning","elasticsearch","admin"],"pid":1356,"message":"No living connect>
Do you have any idea where the problem is?
To resolve this, edit your Elasticsearch configuration and comment out the xpack entries by adding a # sign at the beginning of those lines. Save the file and restart the Elasticsearch and Kibana services.
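For illustration, here is a sketch of what the commented-out lines might look like in elasticsearch.yml. The exact xpack.* keys are placeholders; comment out whichever ones actually appear in your file:

```yaml
# /etc/elasticsearch/elasticsearch.yml
# (hypothetical example -- the xpack.* keys below are placeholders)
# xpack.security.enabled: true
# xpack.monitoring.collection.enabled: true
```

After saving, restart both services, e.g. with sudo systemctl restart elasticsearch kibana.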
Check the Kibana status: to view the Kibana status page, use the status endpoint, for example localhost:5601/status. For JSON-formatted server status details, use the localhost:5601/api/status API endpoint.
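Both endpoints can be queried with curl from the Kibana host (assuming the default port 5601):

```shell
# Human-readable status page (HTML)
curl http://localhost:5601/status

# JSON-formatted status details, including the state of each plugin
curl http://localhost:5601/api/status
```

If Kibana cannot reach Elasticsearch, the api/status response (when it answers at all) typically reports a degraded state rather than "green".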
I faced the same issue once when I upgraded Elasticsearch from v6 to v7. Deleting the .kibana* indices fixed the problem:
curl --request DELETE 'http://elastic-search-host:9200/.kibana*'
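Before deleting anything, it may be worth listing which .kibana indices actually exist (elastic-search-host is the same placeholder as above). Note that deleting these indices removes saved Kibana objects such as dashboards and visualizations, so export anything you want to keep first:

```shell
# List the Kibana system indices before deleting them
curl 'http://elastic-search-host:9200/_cat/indices/.kibana*?v'

# If you decide to proceed, delete them and then restart Kibana
curl --request DELETE 'http://elastic-search-host:9200/.kibana*'
sudo systemctl restart kibana
```

On restart, Kibana recreates the indices it needs with the current version's mappings.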