I have a container set up to run elasticsearch. The service starts but I can't connect to it via curl or the browser.
RUN \
cd /tmp && \
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.2.tar.gz && \
tar xvzf elasticsearch-1.3.2.tar.gz && \
rm -f elasticsearch-1.3.2.tar.gz && \
mv /tmp/elasticsearch-1.3.2 /elasticsearch
# Define mountable directories.
VOLUME ["/data"]
# Define default command.
CMD ["/elasticsearch/bin/elasticsearch"]
EXPOSE 9200
EXPOSE 9300
Connecting to http://localhost:9200
yields nothing. docker ps shows these ports:
0.0.0.0:49179->9200/tcp, 0.0.0.0:49180->9300/tcp
...
net::ERR_ADDRESS_UNREACHABLE
Am I missing some config value? THANKS!
[Update] I also tried -p in the run command:
docker run -i -p 9200:9200 -p 9300:9300 -t --rm -P team1/image1
I had an issue with port forwarding when running Elasticsearch in a Docker container. I solved it by explicitly telling Elasticsearch which interfaces to bind to:
docker run --rm -p 9200:9200 -p 9300:9300 --name=es elasticsearch:latest -Des.network.host=0.0.0.0
The binding part is the -Des.network.host=0.0.0.0
at the end. I wrote a blog post detailing this at https://mad.is/2016/09/running-elasticsearch-in-docker-container/
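A quick way to check that the flag took effect is to look at the bound_address lines in the startup log (a follow-up suggestion of mine, not part of the original answer; it assumes the container was started with --name=es as above):

```shell
# The http and transport bound_address entries should now show 0.0.0.0
# rather than only the container-internal 172.17.x.x address.
docker logs es 2>&1 | grep bound_address
```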
If you are running docker on OSX, note that the host is really the VirtualBox instance that was installed when you initialized boot2docker. So in this situation, instead of using:
curl http://localhost:9200
find the IP of the VirtualBox instance (which I'll denote as VM_IP) using:
boot2docker ip
then try:
curl http://<VM_IP>:9200
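The two steps can be combined into one command. The localhost fallback below is my own addition for machines without boot2docker (e.g. plain Linux), not part of the original answer:

```shell
# Resolve the docker host: the boot2docker VM's IP on OS X,
# or localhost where boot2docker is not installed.
VM_IP=$(boot2docker ip 2>/dev/null || echo localhost)
curl -s --max-time 2 "http://${VM_IP}:9200" || echo "no response from ${VM_IP}:9200"
```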
I tested your Dockerfile, and it just works.
FROM dockerfile/java
RUN \
cd /tmp && \
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.2.tar.gz && \
tar xvzf elasticsearch-1.3.2.tar.gz && \
rm -f elasticsearch-1.3.2.tar.gz && \
mv /tmp/elasticsearch-1.3.2 /elasticsearch
# Define mountable directories.
VOLUME ["/data"]
# Define default command.
CMD ["/elasticsearch/bin/elasticsearch"]
EXPOSE 9200
EXPOSE 9300
I built this Dockerfile and ran it.
$ docker build -t 25312935 .
$ docker run -t -p 9200:9200 -p 9300:9300 --rm 25312935
[2014-08-15 04:41:08,349][INFO ][node ] [Black Crow] version[1.3.2], pid[1], build[dee175d/2014-08-13T14:29:30Z]
[2014-08-15 04:41:08,349][INFO ][node ] [Black Crow] initializing ...
[2014-08-15 04:41:08,353][INFO ][plugins ] [Black Crow] loaded [], sites []
[2014-08-15 04:41:10,444][INFO ][node ] [Black Crow] initialized
[2014-08-15 04:41:10,444][INFO ][node ] [Black Crow] starting ...
[2014-08-15 04:41:10,547][INFO ][transport ] [Black Crow] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/172.17.0.72:9300]}
[2014-08-15 04:41:10,560][INFO ][discovery ] [Black Crow] elasticsearch/0mpczYoYSZCiAmbkxcsfpg
[2014-08-15 04:41:13,601][INFO ][cluster.service ] [Black Crow] new_master [Black Crow][0mpczYoYSZCiAmbkxcsfpg][eeb3396b1ecc][inet[/172.17.0.72:9300]], reason: zen-disco-join (elected_as_master)
[2014-08-15 04:41:13,615][INFO ][http ] [Black Crow] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/172.17.0.72:9200]}
[2014-08-15 04:41:13,615][INFO ][node ] [Black Crow] started
[2014-08-15 04:41:13,634][INFO ][gateway ] [Black Crow] recovered [0] indices into cluster_state
As you can see below, a request to 127.0.0.1:9200
returns a JSON response.
$ curl 127.0.0.1:9200
{
"status" : 200,
"name" : "Black Crow",
"version" : {
"number" : "1.3.2",
},
"tagline" : "You Know, for Search"
}
Check your -p
option. It publishes a container port to the host. If you don't specify the host port explicitly, Docker assigns a random one, like below.
$ docker run -t -p 9200 -p 9300 --rm 25312935
$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1aa4c2c84d04 25312935:latest /elasticsearch/bin/e 15 seconds ago Up 15 seconds 0.0.0.0:49153->9200/tcp, 0.0.0.0:49154->9300/tcp sad_shockley
0.0.0.0:49153->9200/tcp
means that you can access the container's port 9200 through the host's port 49153.
$ curl 127.0.0.1:49153
{
"status" : 200,
"name" : "Golem",
"version" : {
"number" : "1.3.2",
},
"tagline" : "You Know, for Search"
}
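If you don't want to read the random port off the docker ps table by eye, you can extract it from the PORTS string with standard text tools (a sketch of mine; the string below is copied from the output above, and in a live session you could use docker ps --format '{{.Ports}}' -l instead):

```shell
# Pull out the host port that maps to the container's 9200.
ports="0.0.0.0:49153->9200/tcp, 0.0.0.0:49154->9300/tcp"
host_port=$(echo "$ports" | tr ',' '\n' | grep '>9200/tcp' | sed 's/.*:\([0-9]*\)->.*/\1/')
echo "$host_port"   # 49153
```

curl "127.0.0.1:${host_port}" then hits Elasticsearch regardless of which random port Docker picked.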
So if you want to use the host's 9200 port, explicitly write the host port, like
-p 9200:9200 or
-p 0.0.0.0:9200:9200
$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eeb3396b1ecc 25312935:latest /elasticsearch/bin/e 59 seconds ago Up 58 seconds 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp high_elion
If this still doesn't work, try the --net=host
option. It makes the container use the host's network stack.
$ docker run -t --net=host --rm 25312935
$ curl 127.0.0.1:9200
{
"status" : 200,
"name" : "Black Crow",
"version" : {
"number" : "1.3.2",
},
"tagline" : "You Know, for Search"
}
If neither works, I think you need to check the rest of your network configuration.
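To narrow down where the connection dies, a small reachability probe can compare the published host port against the container's own address (a debugging sketch I'm adding here; 172.17.0.72 is the container IP from the startup log above and will differ on your machine):

```shell
# Report whether an HTTP connection to host:port succeeds.
probe() {
  if curl -s --max-time 2 -o /dev/null "http://$1:$2"; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 unreachable"
  fi
}

probe 127.0.0.1 9200    # the published host port
probe 172.17.0.72 9200  # the container IP (only works from the docker host)
```

If the container IP answers but the host port does not, the problem is in the -p mapping; if neither answers, Elasticsearch is not bound to an externally visible interface.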