 

How to use the official docker elasticsearch container?

I have the following Dockerfile:

FROM docker.elastic.co/elasticsearch/elasticsearch:5.4.0
RUN elasticsearch
EXPOSE 80

I think the 3rd line is never reached.

When I try to access the Docker container from my local machine at 172.17.0.2:9300

I get nothing. What am I missing? I want to access Elasticsearch from the local host machine.

Asked May 16 '17 by CommonSenseCode



1 Answer

I recommend using docker-compose (which makes a lot of things much easier) with the following configuration.

Configuration (for development)

The configuration starts three services: Elasticsearch itself, plus two development utilities, Kibana and the head plugin (both can be omitted if you don't need them).
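If you only need a single Elasticsearch node without Kibana, the official image can also be started directly with docker run. This is a minimal sketch; the container name, port mappings, and environment settings mirror the compose configuration below:

```shell
# Single-node dev setup (Elasticsearch 5.4.0).
# http.host/transport.host make the node listen on all interfaces
# inside the container so the published ports are reachable.
docker run -d --name elasticsearch_540 \
  -p 9200:9200 -p 9300:9300 \
  -e "http.host=0.0.0.0" \
  -e "transport.host=0.0.0.0" \
  -e "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
  docker.elastic.co/elasticsearch/elasticsearch:5.4.0
```

Note that 9200 is the HTTP API port; 9300 is the internal transport protocol, which is not meant to be queried with a browser or curl.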

In the same directory you will need three files:

  • docker-compose.yml
  • elasticsearch.yml
  • kibana.yml

With the following contents:

docker-compose.yml

version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.0
    container_name: elasticsearch_540
    environment:
      - http.host=0.0.0.0
      - transport.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 2g
    cap_add:
      - IPC_LOCK
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.0
    container_name: kibana_540
    environment:
      - SERVER_HOST=0.0.0.0
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
  headPlugin:
    image: mobz/elasticsearch-head:5
    container_name: head_540
    ports:
      - 9100:9100

volumes:
  esdata:
    driver: local

elasticsearch.yml

cluster.name: "chimeo-docker-cluster"
node.name: "chimeo-docker-single-node"
network.host: 0.0.0.0

http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: "Authorization"

kibana.yml

server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
elasticsearch.username: elastic
elasticsearch.password: changeme
xpack.monitoring.ui.container.elasticsearch.enabled: true

Running

With the above three files in the same directory, and that directory as your current working directory, run (may require sudo, depending on how your docker-compose is set up):

docker-compose up

The stack will start up and you will see interleaved logs from the three services: elasticsearch_540, kibana_540 and head_540.
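If you'd rather get your terminal back, the same stack can be started in the background and inspected per service using standard docker-compose commands:

```shell
# Start all three services detached
docker-compose up -d

# Follow only the Elasticsearch logs
docker-compose logs -f elasticsearch

# Stop and remove the containers (the named esdata volume persists)
docker-compose down
```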

After the initial start-up your Elasticsearch cluster will be available over HTTP on port 9200 and over the transport protocol on port 9300. Verify that the cluster started with the following curl:

curl -u elastic:changeme http://localhost:9200/_cat/health 
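Once the health check responds, you can confirm that the cluster accepts writes by indexing and fetching a test document. The index and document names here are arbitrary examples (in Elasticsearch 5.x documents are addressed as index/type/id):

```shell
# Index a sample document
curl -u elastic:changeme -XPUT http://localhost:9200/test/doc/1 \
  -d '{"hello": "world"}'

# Read it back
curl -u elastic:changeme http://localhost:9200/test/doc/1
```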

Then you can view and play with your cluster using either Kibana (credentials elastic / changeme):

http://localhost:5601/

or head plugin:

http://localhost:9100/?base_uri=http://localhost:9200&auth_user=elastic&auth_password=changeme
Answered Oct 10 '22 by slawek