ELK with Salesforce
My Logstash container cannot reach Elasticsearch and keeps logging:
:url=>#<URI::HTTP URL:http://localhost:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
docker-compose.yml
version: '2'

services:

  elasticsearch:
    build: elasticsearch/
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      # disable X-Pack
      # see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
      # https://www.elastic.co/guide/en/x-pack/current/installing-xpack.html#xpack-enabling
      xpack.security.enabled: "false"
      xpack.monitoring.enabled: "false"
      xpack.graph.enabled: "false"
      xpack.watcher.enabled: "false"
    networks:
      - elk

  logstash:
    build: logstash/
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline/salesforce.conf:/usr/share/logstash/pipeline/salesforce.conf
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build: kibana/
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge
logstash.yml
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.enabled: false
Pipeline configs (mounted into /usr/share/logstash/pipeline):
logstash.conf:
input {
  tcp {
    port => 5000
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
salesforce.conf:
input {
  salesforce {
    client_id => 'XXXXXX'
    client_secret => 'XXXXXX'
    username => 'XXXXXXX'
    password => 'XXXXX'
    security_token => 'XXXXX'
    sfdc_object_name => 'XXXXXXX'
    use_test_sandbox => true
  }
}
output {
  elasticsearch {
    index => "salesforce"
    hosts => "localhost"
  }
}
The error I get after running docker-compose.exe --verbose up:
[2017-06-01T15:36:18,518][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
elasticsearch_1 | [2017-06-01T15:36:18,590][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
elasticsearch_1 | [2017-06-01T15:36:18,630][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
logstash_1 | [2017-06-01T15:36:18,691][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x43875c12 URL://elasticsearch:9200>]}
logstash_1 | [2017-06-01T15:36:18,733][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
logstash_1 | [2017-06-01T15:36:18,736][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
logstash_1 | [2017-06-01T15:36:18,764][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x6300907a URL:http://localhost:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
logstash_1 | [2017-06-01T15:36:18,770][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
logstash_1 | [2017-06-01T15:36:18,788][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
logstash_1 | W, [2017-06-01T15:36:34.472000 #1] WARN -- : You are setting a key that conflicts with a built-in method Restforce::Mash#length defined in Hash. This can cause unexpected behavior when accessing the key via as a property. You can still access the key via the #[] method.
logstash_1 | W, [2017-06-01T15:36:34.474000 #1] WARN -- : You are setting a key that conflicts with a built-in method Restforce::Mash#length defined in Hash. This can cause unexpected behavior when accessing the key via as a property. You can still access the key via the #[] method.
logstash_1 | W, [2017-06-01T15:36:34.476000 #1] WARN -- : You are setting a key that conflicts with a built-in method Restforce::Mash#length defined in Hash. This can cause unexpected behavior when accessing the key via as a property. You can still access the key via the #[] method.
logstash_1 | [2017-06-01T15:36:34,489][INFO ][logstash.pipeline ] Pipeline main started
logstash_1 | [2017-06-01T15:36:34,667][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
logstash_1 | [2017-06-01T15:36:35,353][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
logstash_1 | [2017-06-01T15:36:35,363][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x6300907a URL:http://localhost:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
elasticsearch_1 | [2017-06-01T15:36:38,587][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
elasticsearch_1 | [2017-06-01T15:36:38,587][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
elasticsearch_1 | [2017-06-01T15:36:38,734][INFO ][o.e.c.m.MetaDataCreateIndexService] [faeuqd8] [logstash-2017.06.01] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [_default_]
elasticsearch_1 | [2017-06-01T15:36:38,809][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
elasticsearch_1 | [2017-06-01T15:36:38,809][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
elasticsearch_1 | [2017-06-01T15:36:39,517][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
elasticsearch_1 | [2017-06-01T15:36:39,528][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
elasticsearch_1 | [2017-06-01T15:36:39,528][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
elasticsearch_1 | [2017-06-01T15:36:39,529][WARN ][o.e.d.i.m.TypeParsers ] field [include_in_all] is deprecated, as [_all] is deprecated, and will be disallowed in 6.0, use [copy_to] instead.
I can see Elasticsearch (9200) and Kibana (5601) in the browser:
{
  "name" : "GIpJMg4",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "AWfRxKkqS_-GTwlf0nRkaA",
  "version" : {
    "number" : "5.4.0",
    "build_hash" : "780f8c4",
    "build_date" : "2017-04-28T17:43:27.229Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.0"
  },
  "tagline" : "You Know, for Search"
}
Although I'm far, far from an expert on this, I think I've had a similar issue myself.
The error message points to a connection error when trying to connect to localhost. Therefore, try changing hosts => "localhost" in your salesforce.conf to hosts => "elasticsearch:9200". I believe it is currently pointing at localhost inside the container that runs the output, i.e. your Logstash container, rather than the Elasticsearch container.
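For reference, here is a sketch of what the corrected output block in salesforce.conf would look like, using the elasticsearch service name from your docker-compose.yml (which Docker's embedded DNS resolves on the shared elk network):
output {
  elasticsearch {
    # hostname matches the compose service name, so it resolves inside the elk network
    index => "salesforce"
    hosts => "elasticsearch:9200"
  }
}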
Hope this solves the issue.
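If you want to verify the networking, you could run something like the commands below from the compose project directory (assuming curl is available inside the Logstash image); the first should be refused, while the second should return the same JSON banner you saw in the browser:
# localhost inside the Logstash container is the Logstash container itself
docker-compose exec logstash curl http://localhost:9200
# the compose service name resolves to the Elasticsearch container on the elk network
docker-compose exec logstash curl http://elasticsearch:9200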
PS! You should look into the other warnings as well, as it seems you're using something (the include_in_all/_all mapping options flagged by Elasticsearch) that will be disallowed in coming versions.
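Those deprecation warnings point to copy_to as the replacement; purely as an illustration (the index, type and field names below are placeholders, not taken from your setup), a 5.x mapping using copy_to instead of _all looks roughly like this:
PUT my_index
{
  "mappings": {
    "doc": {
      "properties": {
        "first_name": { "type": "text", "copy_to": "full_text" },
        "last_name":  { "type": "text", "copy_to": "full_text" },
        "full_text":  { "type": "text" }
      }
    }
  }
}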
BR, Audun